00:00:00.000 Started by upstream project "autotest-per-patch" build number 132728
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:07.312 The recommended git tool is: git
00:00:07.313 using credential 00000000-0000-0000-0000-000000000002
00:00:07.314 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:07.330 Fetching changes from the remote Git repository
00:00:07.333 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:07.347 Using shallow fetch with depth 1
00:00:07.347 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:07.347 > git --version # timeout=10
00:00:07.358 > git --version # 'git version 2.39.2'
00:00:07.358 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:07.371 Setting http proxy: proxy-dmz.intel.com:911
00:00:07.371 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:12.892 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:12.906 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:12.920 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:12.920 > git config core.sparsecheckout # timeout=10
00:00:12.933 > git read-tree -mu HEAD # timeout=10
00:00:12.950 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:12.979 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:12.979 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
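The sequence above pins the jbp config repo to one commit with a shallow, depth-1 fetch. A minimal sketch of reproducing the same checkout by hand, using only the URL and SHA from the log (credentials and the proxy setting are environment-specific and omitted here):

  git init jbp && cd jbp
  git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f db4637e8b949f278f369ec13f70585206ccd9507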
00:00:13.109 [Pipeline] Start of Pipeline
00:00:13.120 [Pipeline] library
00:00:13.121 Loading library shm_lib@master
00:00:13.121 Library shm_lib@master is cached. Copying from home.
00:00:13.136 [Pipeline] node
00:04:10.210 Still waiting to schedule task
00:04:10.210 Waiting for next available executor on ‘vagrant-vm-host’
00:21:46.196 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest
00:21:46.198 [Pipeline] {
00:21:46.210 [Pipeline] catchError
00:21:46.212 [Pipeline] {
00:21:46.289 [Pipeline] wrap
00:21:46.298 [Pipeline] {
00:21:46.308 [Pipeline] stage
00:21:46.310 [Pipeline] { (Prologue)
00:21:46.330 [Pipeline] echo
00:21:46.332 Node: VM-host-SM4
00:21:46.338 [Pipeline] cleanWs
00:21:46.347 [WS-CLEANUP] Deleting project workspace...
00:21:46.347 [WS-CLEANUP] Deferred wipeout is used...
00:21:46.353 [WS-CLEANUP] done
00:21:46.541 [Pipeline] setCustomBuildProperty
00:21:46.634 [Pipeline] httpRequest
00:21:47.048 [Pipeline] echo
00:21:47.050 Sorcerer 10.211.164.101 is alive
00:21:47.062 [Pipeline] retry
00:21:47.064 [Pipeline] {
00:21:47.078 [Pipeline] httpRequest
00:21:47.091 HttpMethod: GET
00:21:47.092 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:21:47.092 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:21:47.093 Response Code: HTTP/1.1 200 OK
00:21:47.093 Success: Status code 200 is in the accepted range: 200,404
00:21:47.094 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:21:47.231 [Pipeline] }
00:21:47.244 [Pipeline] // retry
00:21:47.251 [Pipeline] sh
00:21:47.532 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:21:47.548 [Pipeline] httpRequest
00:21:47.895 [Pipeline] echo
00:21:47.897 Sorcerer 10.211.164.101 is alive
00:21:47.906 [Pipeline] retry
00:21:47.908 [Pipeline] {
00:21:47.922 [Pipeline] httpRequest
00:21:47.926 HttpMethod: GET
00:21:47.927 URL: http://10.211.164.101/packages/spdk_88d8055fc58c63612d6b42b66e63dfbe96281bed.tar.gz
00:21:47.927 Sending request to url: http://10.211.164.101/packages/spdk_88d8055fc58c63612d6b42b66e63dfbe96281bed.tar.gz
00:21:47.929 Response Code: HTTP/1.1 200 OK
00:21:47.929 Success: Status code 200 is in the accepted range: 200,404
00:21:47.930 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_88d8055fc58c63612d6b42b66e63dfbe96281bed.tar.gz
00:21:50.204 [Pipeline] }
00:21:50.225 [Pipeline] // retry
00:21:50.233 [Pipeline] sh
00:21:50.514 + tar --no-same-owner -xf spdk_88d8055fc58c63612d6b42b66e63dfbe96281bed.tar.gz
00:21:53.059 [Pipeline] sh
00:21:53.340 + git -C spdk log --oneline -n5
00:21:53.340 88d8055fc nvme: add poll_group interrupt callback
00:21:53.340 e9db16374 nvme: add spdk_nvme_poll_group_get_fd_group()
00:21:53.340 cf089b398 thread: fd_group-based interrupts
00:21:53.340 8a4656bc1 thread: move interrupt allocation to a function
00:21:53.340 09908f908 util: add method for setting fd_group's wrapper
00:21:53.365 [Pipeline] writeFile
00:21:53.384 [Pipeline] sh
00:21:53.671 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:21:53.686 [Pipeline] sh
00:21:53.968 + cat autorun-spdk.conf
00:21:53.968 SPDK_RUN_FUNCTIONAL_TEST=1
00:21:53.968 SPDK_TEST_NVME=1
00:21:53.968 SPDK_TEST_FTL=1
00:21:53.968 SPDK_TEST_ISAL=1
00:21:53.968 SPDK_RUN_ASAN=1
00:21:53.968 SPDK_RUN_UBSAN=1
00:21:53.968 SPDK_TEST_XNVME=1
00:21:53.968 SPDK_TEST_NVME_FDP=1
00:21:53.968 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:21:53.976 RUN_NIGHTLY=0
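autorun-spdk.conf is a plain shell fragment: prepare_nvme.sh sources it below to decide which backing images to create, and the same file is later handed to spdk/autorun.sh inside the VM. A sketch of consuming it the same way (relative paths assumed for brevity):

  source ./autorun-spdk.conf
  (( SPDK_TEST_FTL == 1 )) && echo 'FTL backing image required'
  spdk/autorun.sh ./autorun-spdk.conf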
00:21:53.978 [Pipeline] }
00:21:53.994 [Pipeline] // stage
00:21:54.011 [Pipeline] stage
00:21:54.014 [Pipeline] { (Run VM)
00:21:54.028 [Pipeline] sh
00:21:54.310 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:21:54.310 + echo 'Start stage prepare_nvme.sh'
00:21:54.310 Start stage prepare_nvme.sh
00:21:54.310 + [[ -n 9 ]]
00:21:54.310 + disk_prefix=ex9
00:21:54.310 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:21:54.310 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:21:54.310 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:21:54.310 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:21:54.310 ++ SPDK_TEST_NVME=1
00:21:54.310 ++ SPDK_TEST_FTL=1
00:21:54.310 ++ SPDK_TEST_ISAL=1
00:21:54.310 ++ SPDK_RUN_ASAN=1
00:21:54.310 ++ SPDK_RUN_UBSAN=1
00:21:54.310 ++ SPDK_TEST_XNVME=1
00:21:54.310 ++ SPDK_TEST_NVME_FDP=1
00:21:54.310 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:21:54.310 ++ RUN_NIGHTLY=0
00:21:54.310 + cd /var/jenkins/workspace/nvme-vg-autotest
00:21:54.310 + nvme_files=()
00:21:54.310 + declare -A nvme_files
00:21:54.310 + backend_dir=/var/lib/libvirt/images/backends
00:21:54.310 + nvme_files['nvme.img']=5G
00:21:54.310 + nvme_files['nvme-cmb.img']=5G
00:21:54.310 + nvme_files['nvme-multi0.img']=4G
00:21:54.310 + nvme_files['nvme-multi1.img']=4G
00:21:54.310 + nvme_files['nvme-multi2.img']=4G
00:21:54.310 + nvme_files['nvme-openstack.img']=8G
00:21:54.310 + nvme_files['nvme-zns.img']=5G
00:21:54.310 + (( SPDK_TEST_NVME_PMR == 1 ))
00:21:54.310 + (( SPDK_TEST_FTL == 1 ))
00:21:54.310 + nvme_files["nvme-ftl.img"]=6G
00:21:54.310 + (( SPDK_TEST_NVME_FDP == 1 ))
00:21:54.310 + nvme_files["nvme-fdp.img"]=1G
00:21:54.310 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:21:54.310 + for nvme in "${!nvme_files[@]}"
00:21:54.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G
00:21:54.310 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:21:54.310 + for nvme in "${!nvme_files[@]}"
00:21:54.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-ftl.img -s 6G
00:21:54.310 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:21:54.310 + for nvme in "${!nvme_files[@]}"
00:21:54.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G
00:21:54.620 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:21:54.620 + for nvme in "${!nvme_files[@]}"
00:21:54.620 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G
00:21:54.878 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:21:54.878 + for nvme in "${!nvme_files[@]}"
00:21:54.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G
00:21:54.878 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:21:54.878 + for nvme in "${!nvme_files[@]}"
00:21:54.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G
00:21:54.878 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:21:54.878 + for nvme in "${!nvme_files[@]}"
00:21:54.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G
00:21:55.137 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:21:55.137 + for nvme in "${!nvme_files[@]}"
00:21:55.137 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-fdp.img -s 1G
00:21:55.137 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:21:55.137 + for nvme in "${!nvme_files[@]}"
00:21:55.137 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G
00:21:55.397 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:21:55.397 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu
00:21:55.655 + echo 'End stage prepare_nvme.sh'
00:21:55.655 End stage prepare_nvme.sh
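The associative array maps image names to sizes and the loop creates one raw backing file per entry; the shuffled order above is just bash hash order. A minimal sketch of the same pattern, assuming create_nvme_img.sh ultimately wraps qemu-img, which is what prints the 'Formatting ... preallocation=falloc' lines:

  declare -A nvme_files=( [nvme.img]=5G [nvme-ftl.img]=6G [nvme-fdp.img]=1G )
  backend_dir=/var/lib/libvirt/images/backends
  for nvme in "${!nvme_files[@]}"; do
      # raw image preallocated with fallocate(), matching the output above
      qemu-img create -f raw -o preallocation=falloc \
          "$backend_dir/ex9-$nvme" "${nvme_files[$nvme]}"
  done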
00:21:55.667 [Pipeline] sh
00:21:55.950 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:21:55.950 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex9-nvme.img -b /var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex9-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:21:56.209
00:21:56.209 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:21:56.209 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:21:56.209 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:21:56.209 HELP=0
00:21:56.209 DRY_RUN=0
00:21:56.209 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,
00:21:56.209 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:21:56.209 NVME_AUTO_CREATE=0
00:21:56.209 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,,
00:21:56.209 NVME_CMB=,,,,
00:21:56.209 NVME_PMR=,,,,
00:21:56.209 NVME_ZNS=,,,,
00:21:56.209 NVME_MS=true,,,,
00:21:56.209 NVME_FDP=,,,on,
00:21:56.209 SPDK_VAGRANT_DISTRO=fedora39
00:21:56.209 SPDK_VAGRANT_VMCPU=10
00:21:56.209 SPDK_VAGRANT_VMRAM=12288
00:21:56.209 SPDK_VAGRANT_PROVIDER=libvirt
00:21:56.209 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:21:56.209 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:21:56.209 SPDK_OPENSTACK_NETWORK=0
00:21:56.209 VAGRANT_PACKAGE_BOX=0
00:21:56.209 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:21:56.209 FORCE_DISTRO=true
00:21:56.209 VAGRANT_BOX_VERSION=
00:21:56.209 EXTRA_VAGRANTFILES=
00:21:56.209 NIC_MODEL=e1000
00:21:56.209
00:21:56.209 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:21:56.209 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:21:59.572 Bringing machine 'default' up with 'libvirt' provider...
00:22:00.140 ==> default: Creating image (snapshot of base box volume).
00:22:00.140 ==> default: Creating domain with the following settings...
00:22:00.140 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733491012_b7d050117ae79ffc9448
00:22:00.140 ==> default: -- Domain type: kvm
00:22:00.140 ==> default: -- Cpus: 10
00:22:00.140 ==> default: -- Feature: acpi
00:22:00.140 ==> default: -- Feature: apic
00:22:00.140 ==> default: -- Feature: pae
00:22:00.140 ==> default: -- Memory: 12288M
00:22:00.140 ==> default: -- Memory Backing: hugepages:
00:22:00.140 ==> default: -- Management MAC:
00:22:00.140 ==> default: -- Loader:
00:22:00.140 ==> default: -- Nvram:
00:22:00.140 ==> default: -- Base box: spdk/fedora39
00:22:00.140 ==> default: -- Storage pool: default
00:22:00.140 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733491012_b7d050117ae79ffc9448.img (20G)
00:22:00.140 ==> default: -- Volume Cache: default
00:22:00.140 ==> default: -- Kernel:
00:22:00.140 ==> default: -- Initrd:
00:22:00.140 ==> default: -- Graphics Type: vnc
00:22:00.140 ==> default: -- Graphics Port: -1
00:22:00.140 ==> default: -- Graphics IP: 127.0.0.1
00:22:00.140 ==> default: -- Graphics Password: Not defined
00:22:00.140 ==> default: -- Video Type: cirrus
00:22:00.140 ==> default: -- Video VRAM: 9216
00:22:00.140 ==> default: -- Sound Type:
00:22:00.140 ==> default: -- Keymap: en-us
00:22:00.140 ==> default: -- TPM Path:
00:22:00.140 ==> default: -- INPUT: type=mouse, bus=ps2
00:22:00.140 ==> default: -- Command line args:
00:22:00.140 ==> default: -> value=-device,
00:22:00.140 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:22:00.140 ==> default: -> value=-drive,
00:22:00.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:22:00.140 ==> default: -> value=-device,
00:22:00.140 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:22:00.140 ==> default: -> value=-device,
00:22:00.140 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:22:00.140 ==> default: -> value=-drive,
00:22:00.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-1-drive0,
00:22:00.140 ==> default: -> value=-device,
00:22:00.140 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:22:00.140 ==> default: -> value=-device,
00:22:00.140 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:22:00.140 ==> default: -> value=-drive,
00:22:00.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:22:00.140 ==> default: -> value=-device,
00:22:00.140 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:22:00.140 ==> default: -> value=-drive,
00:22:00.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:22:00.140 ==> default: -> value=-device,
00:22:00.140 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:22:00.140 ==> default: -> value=-drive,
00:22:00.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:22:00.140 ==> default: -> value=-device,
00:22:00.140 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:22:00.140 ==> default: -> value=-device,
00:22:00.140 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:22:00.140 ==> default: -> value=-device,
00:22:00.140 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:22:00.140 ==> default: -> value=-drive,
00:22:00.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:22:00.140 ==> default: -> value=-device,
00:22:00.140 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
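Each value= pair above is a passthrough QEMU argument. Reassembled, the FDP-enabled controller alone corresponds to roughly the following invocation (a sketch built from the printed values with the trailing commas dropped; the emulator path is the --qemu-emulator printed earlier, and the rest of the machine arguments are elided):

  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 ... \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096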
00:22:00.398 ==> default: Creating shared folders metadata...
00:22:00.399 ==> default: Starting domain.
00:22:02.301 ==> default: Waiting for domain to get an IP address...
00:22:20.501 ==> default: Waiting for SSH to become available...
00:22:20.501 ==> default: Configuring and enabling network interfaces...
00:22:24.817 default: SSH address: 192.168.121.176:22
00:22:24.817 default: SSH username: vagrant
00:22:24.817 default: SSH auth method: private key
00:22:27.350 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:22:37.322 ==> default: Mounting SSHFS shared folder...
00:22:37.887 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:22:37.887 ==> default: Checking Mount..
00:22:39.340 ==> default: Folder Successfully Mounted!
00:22:39.340 ==> default: Running provisioner: file...
00:22:40.272 default: ~/.gitconfig => .gitconfig
00:22:40.837
00:22:40.837 SUCCESS!
00:22:40.837
00:22:40.837 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:22:40.837 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:22:40.837 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:22:40.837
00:22:40.845 [Pipeline] }
00:22:40.859 [Pipeline] // stage
00:22:40.867 [Pipeline] dir
00:22:40.868 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:22:40.869 [Pipeline] {
00:22:40.881 [Pipeline] catchError
00:22:40.882 [Pipeline] {
00:22:40.894 [Pipeline] sh
00:22:41.174 + vagrant ssh-config --host vagrant
00:22:41.174 + sed -ne /^Host/,$p
00:22:41.174 + tee ssh_conf
00:22:44.465 Host vagrant
00:22:44.465 HostName 192.168.121.176
00:22:44.465 User vagrant
00:22:44.465 Port 22
00:22:44.465 UserKnownHostsFile /dev/null
00:22:44.465 StrictHostKeyChecking no
00:22:44.465 PasswordAuthentication no
00:22:44.465 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:22:44.465 IdentitiesOnly yes
00:22:44.465 LogLevel FATAL
00:22:44.465 ForwardAgent yes
00:22:44.465 ForwardX11 yes
00:22:44.465
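The Host block tee'd into ssh_conf lets the rest of the pipeline drive the VM with stock OpenSSH instead of vagrant ssh; every later step follows exactly this pattern:

  ssh -t -F ssh_conf vagrant@vagrant 'uname -a'
  scp -F ssh_conf -r autoruner.sh vagrant@vagrant:./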
00:22:44.492 [Pipeline] withEnv
00:22:44.494 [Pipeline] {
00:22:44.510 [Pipeline] sh
00:22:44.789 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:22:44.789 source /etc/os-release
00:22:44.789 [[ -e /image.version ]] && img=$(< /image.version)
00:22:44.789 # Minimal, systemd-like check.
00:22:44.789 if [[ -e /.dockerenv ]]; then
00:22:44.789 # Clear garbage from the node's name:
00:22:44.789 # agt-er_autotest_547-896 -> autotest_547-896
00:22:44.789 # $HOSTNAME is the actual container id
00:22:44.789 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:22:44.789 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:22:44.789 # We can assume this is a mount from a host where container is running,
00:22:44.789 # so fetch its hostname to easily identify the target swarm worker.
00:22:44.789 container="$(< /etc/hostname) ($agent)"
00:22:44.789 else
00:22:44.789 # Fallback
00:22:44.789 container=$agent
00:22:44.789 fi
00:22:44.789 fi
00:22:44.789 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:22:44.789
00:22:45.059 [Pipeline] }
00:22:45.074 [Pipeline] // withEnv
00:22:45.084 [Pipeline] setCustomBuildProperty
00:22:45.099 [Pipeline] stage
00:22:45.102 [Pipeline] { (Tests)
00:22:45.120 [Pipeline] sh
00:22:45.401 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:22:45.675 [Pipeline] sh
00:22:45.962 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:22:46.240 [Pipeline] timeout
00:22:46.241 Timeout set to expire in 50 min
00:22:46.243 [Pipeline] {
00:22:46.259 [Pipeline] sh
00:22:46.558 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:22:47.126 HEAD is now at 88d8055fc nvme: add poll_group interrupt callback
00:22:47.138 [Pipeline] sh
00:22:47.422 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:22:47.698 [Pipeline] sh
00:22:47.984 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:22:48.262 [Pipeline] sh
00:22:48.548 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:22:48.807 ++ readlink -f spdk_repo
00:22:48.807 + DIR_ROOT=/home/vagrant/spdk_repo
00:22:48.807 + [[ -n /home/vagrant/spdk_repo ]]
00:22:48.807 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:22:48.807 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:22:48.807 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:22:48.807 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:22:48.807 + [[ -d /home/vagrant/spdk_repo/output ]]
00:22:48.807 + [[ nvme-vg-autotest == pkgdep-* ]]
00:22:48.807 + cd /home/vagrant/spdk_repo
00:22:48.807 + source /etc/os-release
00:22:48.807 ++ NAME='Fedora Linux'
00:22:48.807 ++ VERSION='39 (Cloud Edition)'
00:22:48.807 ++ ID=fedora
00:22:48.807 ++ VERSION_ID=39
00:22:48.807 ++ VERSION_CODENAME=
00:22:48.807 ++ PLATFORM_ID=platform:f39
00:22:48.807 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:22:48.807 ++ ANSI_COLOR='0;38;2;60;110;180'
00:22:48.807 ++ LOGO=fedora-logo-icon
00:22:48.807 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:22:48.807 ++ HOME_URL=https://fedoraproject.org/
00:22:48.808 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:22:48.808 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:22:48.808 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:22:48.808 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:22:48.808 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:22:48.808 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:22:48.808 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:22:48.808 ++ SUPPORT_END=2024-11-12
00:22:48.808 ++ VARIANT='Cloud Edition'
00:22:48.808 ++ VARIANT_ID=cloud
00:22:48.808 + uname -a
00:22:48.808 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:22:48.808 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:22:49.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:22:49.636 Hugepages
00:22:49.636 node hugesize free / total
00:22:49.636 node0 1048576kB 0 / 0
00:22:49.636 node0 2048kB 0 / 0
00:22:49.636
00:22:49.636 Type BDF Vendor Device NUMA Driver Device Block devices
00:22:49.636 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:22:49.636 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:22:49.636 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:22:49.636 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:22:49.636 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1
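The four NVMe entries above are the controllers defined on the QEMU command line earlier: addr 0x10 through 0x13 surface as PCI functions 0000:00:10.0 through 0000:00:13.0, and the guest enumeration order differs from the serial order (the three-namespace controller, serial 12342 at 0000:00:12.0, came up as nvme3). A quick cross-check from inside the guest via sysfs:

  for c in /sys/class/nvme/nvme*; do
      echo "$(basename "$c"): serial=$(cat "$c/serial") pci=$(cat "$c/address")"
  done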
00:22:49.636 + rm -f /tmp/spdk-ld-path
00:22:49.636 + source autorun-spdk.conf
00:22:49.636 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:22:49.636 ++ SPDK_TEST_NVME=1
00:22:49.636 ++ SPDK_TEST_FTL=1
00:22:49.636 ++ SPDK_TEST_ISAL=1
00:22:49.636 ++ SPDK_RUN_ASAN=1
00:22:49.636 ++ SPDK_RUN_UBSAN=1
00:22:49.637 ++ SPDK_TEST_XNVME=1
00:22:49.637 ++ SPDK_TEST_NVME_FDP=1
00:22:49.637 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:22:49.637 ++ RUN_NIGHTLY=0
00:22:49.637 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:22:49.637 + [[ -n '' ]]
00:22:49.637 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:22:49.637 + for M in /var/spdk/build-*-manifest.txt
00:22:49.637 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:22:49.637 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:22:49.637 + for M in /var/spdk/build-*-manifest.txt
00:22:49.637 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:22:49.637 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:22:49.637 + for M in /var/spdk/build-*-manifest.txt
00:22:49.637 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:22:49.637 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:22:49.936 ++ uname
00:22:49.936 + [[ Linux == \L\i\n\u\x ]]
00:22:49.936 + sudo dmesg -T
00:22:49.936 + sudo dmesg --clear
00:22:49.936 + dmesg_pid=5300
00:22:49.936 + sudo dmesg -Tw
00:22:49.936 + [[ Fedora Linux == FreeBSD ]]
00:22:49.936 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:22:49.936 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:22:49.936 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:22:49.936 + [[ -x /usr/src/fio-static/fio ]]
00:22:49.936 + export FIO_BIN=/usr/src/fio-static/fio
00:22:49.936 + FIO_BIN=/usr/src/fio-static/fio
00:22:49.936 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:22:49.936 + [[ ! -v VFIO_QEMU_BIN ]]
00:22:49.936 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:22:49.936 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:22:49.936 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:22:49.937 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:22:49.937 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:22:49.937 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:22:49.937 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:22:49.937 13:17:42 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:22:49.937 13:17:42 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:22:49.937 13:17:42 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:22:49.937 13:17:42 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:22:49.937 13:17:42 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:22:49.937 13:17:42 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:22:49.937 13:17:42 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:22:49.937 13:17:42 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:22:49.937 13:17:42 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:22:49.937 13:17:42 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:22:49.937 13:17:42 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:22:49.937 13:17:42 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:22:49.937 13:17:42 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:22:49.937 13:17:42 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:22:49.937 13:17:42 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:22:49.937 13:17:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:49.937 13:17:42 -- scripts/common.sh@15 -- $ shopt -s extglob
00:22:49.937 13:17:42 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:22:49.937 13:17:42 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:49.937 13:17:42 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:49.937 13:17:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:49.937 13:17:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:49.937 13:17:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:49.937 13:17:42 -- paths/export.sh@5 -- $ export PATH
00:22:49.937 13:17:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:49.937 13:17:42 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:22:49.937 13:17:42 -- common/autobuild_common.sh@493 -- $ date +%s
00:22:49.937 13:17:42 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733491062.XXXXXX
00:22:49.937 13:17:42 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733491062.NKvzHH
00:22:49.937 13:17:42 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:22:49.937 13:17:42 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:22:49.937 13:17:42 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:22:49.937 13:17:42 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:22:49.937 13:17:42 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:22:49.937 13:17:42 -- common/autobuild_common.sh@509 -- $ get_config_params
00:22:49.937 13:17:42 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:22:49.937 13:17:42 -- common/autotest_common.sh@10 -- $ set +x
00:22:49.937 13:17:42 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:22:49.937 13:17:42 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:22:49.937 13:17:42 -- pm/common@17 -- $ local monitor
00:22:49.937 13:17:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:22:49.937 13:17:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:22:49.937 13:17:42 -- pm/common@21 -- $ date +%s
00:22:49.937 13:17:42 -- pm/common@25 -- $ sleep 1
00:22:49.937 13:17:42 -- pm/common@21 -- $ date +%s
00:22:49.937 13:17:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733491062
00:22:49.937 13:17:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733491062
00:22:50.219 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733491062_collect-vmstat.pm.log
00:22:50.219 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733491062_collect-cpu-load.pm.log
00:22:51.156 13:17:43 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:22:51.156 13:17:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:22:51.156 13:17:43 -- spdk/autobuild.sh@12 -- $ umask 022
00:22:51.156 13:17:43 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:22:51.156 13:17:43 -- spdk/autobuild.sh@16 -- $ date -u
00:22:51.156 Fri Dec 6 01:17:44 PM UTC 2024
00:22:51.156 13:17:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:22:51.156 v25.01-pre-310-g88d8055fc
00:22:51.156 13:17:44 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:22:51.156 13:17:44 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:22:51.156 13:17:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:22:51.156 13:17:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:22:51.156 13:17:44 -- common/autotest_common.sh@10 -- $ set +x
00:22:51.156 ************************************
00:22:51.156 START TEST asan
00:22:51.156 ************************************
00:22:51.156 using asan
00:22:51.156 13:17:44 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:22:51.156
00:22:51.156 real 0m0.000s
00:22:51.156 user 0m0.000s
00:22:51.156 sys 0m0.000s
00:22:51.156 13:17:44 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:22:51.156 13:17:44 asan -- common/autotest_common.sh@10 -- $ set +x
00:22:51.156 ************************************
00:22:51.156 END TEST asan
00:22:51.156 ************************************
00:22:51.157 13:17:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:22:51.157 13:17:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:22:51.157 13:17:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:22:51.157 13:17:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:22:51.157 13:17:44 -- common/autotest_common.sh@10 -- $ set +x
00:22:51.157 ************************************
00:22:51.157 START TEST ubsan
00:22:51.157 ************************************
00:22:51.157 using ubsan
00:22:51.157 13:17:44 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:22:51.157
00:22:51.157 real 0m0.000s
00:22:51.157 user 0m0.000s
00:22:51.157 sys 0m0.000s
00:22:51.157 13:17:44 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:22:51.157 ************************************
00:22:51.157 13:17:44 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:22:51.157 END TEST ubsan
00:22:51.157 ************************************
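run_test brackets a command with START/END banners and a timing summary, which is what produced the asan and ubsan blocks above. A rough sketch of the observable behavior (not SPDK's actual implementation, which lives in autotest_common.sh and also manages xtrace and exit codes):

  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
  }
  run_test ubsan echo 'using ubsan'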
00:22:51.157 13:17:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:22:51.157 13:17:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:22:51.157 13:17:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:22:51.157 13:17:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:22:51.157 13:17:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:22:51.157 13:17:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:22:51.157 13:17:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:22:51.157 13:17:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:22:51.157 13:17:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:22:51.417 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:22:51.417 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:22:51.675 Using 'verbs' RDMA provider
00:23:07.919 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:23:22.799 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:23:23.057 Creating mk/config.mk...done.
00:23:23.057 Creating mk/cc.flags.mk...done.
00:23:23.057 Type 'make' to build.
00:23:23.057 13:18:15 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:23:23.057 13:18:15 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:23:23.057 13:18:15 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:23:23.057 13:18:15 -- common/autotest_common.sh@10 -- $ set +x
00:23:23.057 ************************************
00:23:23.057 START TEST make
00:23:23.057 ************************************
00:23:23.057 13:18:15 make -- common/autotest_common.sh@1129 -- $ make -j10
00:23:23.316 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:23:23.316 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:23:23.316 meson setup builddir \
00:23:23.316 -Dwith-libaio=enabled \
00:23:23.316 -Dwith-liburing=enabled \
00:23:23.316 -Dwith-libvfn=disabled \
00:23:23.316 -Dwith-spdk=disabled \
00:23:23.316 -Dexamples=false \
00:23:23.316 -Dtests=false \
00:23:23.316 -Dtools=false && \
00:23:23.316 meson compile -C builddir && \
00:23:23.316 cd -)
00:23:23.316 make[1]: Nothing to be done for 'all'.
00:23:26.658 The Meson build system
00:23:26.658 Version: 1.5.0
00:23:26.658 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:23:26.658 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:23:26.658 Build type: native build
00:23:26.658 Project name: xnvme
00:23:26.658 Project version: 0.7.5
00:23:26.658 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:23:26.658 C linker for the host machine: cc ld.bfd 2.40-14
00:23:26.658 Host machine cpu family: x86_64
00:23:26.658 Host machine cpu: x86_64
00:23:26.658 Message: host_machine.system: linux
00:23:26.658 Compiler for C supports arguments -Wno-missing-braces: YES
00:23:26.658 Compiler for C supports arguments -Wno-cast-function-type: YES
00:23:26.658 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:23:26.658 Run-time dependency threads found: YES
00:23:26.658 Has header "setupapi.h" : NO
00:23:26.658 Has header "linux/blkzoned.h" : YES
00:23:26.658 Has header "linux/blkzoned.h" : YES (cached)
00:23:26.658 Has header "libaio.h" : YES
00:23:26.658 Library aio found: YES
00:23:26.658 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:23:26.658 Run-time dependency liburing found: YES 2.2
00:23:26.658 Dependency libvfn skipped: feature with-libvfn disabled
00:23:26.658 Found CMake: /usr/bin/cmake (3.27.7)
00:23:26.658 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:23:26.658 Subproject spdk : skipped: feature with-spdk disabled
00:23:26.658 Run-time dependency appleframeworks found: NO (tried framework)
00:23:26.658 Run-time dependency appleframeworks found: NO (tried framework)
00:23:26.658 Library rt found: YES
00:23:26.658 Checking for function "clock_gettime" with dependency -lrt: YES
00:23:26.658 Configuring xnvme_config.h using configuration
00:23:26.658 Configuring xnvme.spec using configuration
00:23:26.658 Run-time dependency bash-completion found: YES 2.11
00:23:26.658 Message: Bash-completions: /usr/share/bash-completion/completions
00:23:26.658 Program cp found: YES (/usr/bin/cp)
00:23:26.658 Build targets in project: 3
00:23:26.658
00:23:26.658 xnvme 0.7.5
00:23:26.658
00:23:26.658 Subprojects
00:23:26.658 spdk : NO Feature 'with-spdk' disabled
00:23:26.658
00:23:26.658 User defined options
00:23:26.658 examples : false
00:23:26.658 tests : false
00:23:26.658 tools : false
00:23:26.658 with-libaio : enabled
00:23:26.658 with-liburing: enabled
00:23:26.658 with-libvfn : disabled
00:23:26.658 with-spdk : disabled
00:23:26.658
00:23:26.658 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:23:26.918 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:23:26.918 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:23:26.918 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:23:26.918 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:23:26.918 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:23:26.918 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:23:26.918 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:23:27.176 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:23:27.176 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:23:27.176 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:23:27.176 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:23:27.176 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:23:27.176 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:23:27.176 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:23:27.176 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:23:27.176 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:23:27.176 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:23:27.176 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:23:27.176 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:23:27.176 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:23:27.176 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:23:27.176 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:23:27.176 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:23:27.436 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:23:27.436 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:23:27.436 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:23:27.436 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:23:27.436 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:23:27.436 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:23:27.436 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:23:27.436 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:23:27.436 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:23:27.436 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:23:27.436 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:23:27.436 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:23:27.436 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:23:27.436 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:23:27.436 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:23:27.436 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:23:27.436 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:23:27.436 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:23:27.436 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:23:27.436 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:23:27.436 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:23:27.436 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:23:27.436 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:23:27.436 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:23:27.436 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:23:27.436 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:23:27.436 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:23:27.436 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:23:27.436 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:23:27.696 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:23:27.696 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:23:27.696 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:23:27.696 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:23:27.696 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:23:27.696 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:23:27.696 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:23:27.697 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:23:27.697 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:23:27.697 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:23:27.697 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:23:27.697 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:23:27.697 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:23:27.697 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:23:27.697 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:23:27.958 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:23:27.958 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:23:27.958 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:23:27.958 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:23:27.958 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:23:27.958 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:23:27.958 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:23:28.216 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:23:28.216 [75/76] Linking static target lib/libxnvme.a
00:23:28.474 [76/76] Linking target lib/libxnvme.so.0.7.5
00:23:28.474 INFO: autodetecting backend as ninja
00:23:28.474 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:23:28.474 /home/vagrant/spdk_repo/spdk/xnvmebuild
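xnvme was configured above with every optional feature pinned through -D options, then built in 76 ninja steps. Meson persists those option values in the build directory, so they can be inspected or changed later without a full re-setup; a usage sketch (builddir as in the log):

  cd /home/vagrant/spdk_repo/spdk/xnvme
  meson configure builddir                  # list current option values
  meson configure builddir -Dexamples=true  # flip a single option
  meson compile -C builddir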
00:23:38.440 The Meson build system
00:23:38.440 Version: 1.5.0
00:23:38.440 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:23:38.440 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:23:38.440 Build type: native build
00:23:38.440 Program cat found: YES (/usr/bin/cat)
00:23:38.440 Project name: DPDK
00:23:38.440 Project version: 24.03.0
00:23:38.440 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:23:38.440 C linker for the host machine: cc ld.bfd 2.40-14
00:23:38.440 Host machine cpu family: x86_64
00:23:38.440 Host machine cpu: x86_64
00:23:38.440 Message: ## Building in Developer Mode ##
00:23:38.440 Program pkg-config found: YES (/usr/bin/pkg-config)
00:23:38.440 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:23:38.440 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:23:38.440 Program python3 found: YES (/usr/bin/python3)
00:23:38.440 Program cat found: YES (/usr/bin/cat)
00:23:38.440 Compiler for C supports arguments -march=native: YES
00:23:38.440 Checking for size of "void *" : 8
00:23:38.440 Checking for size of "void *" : 8 (cached)
00:23:38.440 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:23:38.440 Library m found: YES
00:23:38.440 Library numa found: YES
00:23:38.440 Has header "numaif.h" : YES
00:23:38.440 Library fdt found: NO
00:23:38.440 Library execinfo found: NO
00:23:38.440 Has header "execinfo.h" : YES
00:23:38.440 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:23:38.440 Run-time dependency libarchive found: NO (tried pkgconfig)
00:23:38.440 Run-time dependency libbsd found: NO (tried pkgconfig)
00:23:38.440 Run-time dependency jansson found: NO (tried pkgconfig)
00:23:38.440 Run-time dependency openssl found: YES 3.1.1
00:23:38.440 Run-time dependency libpcap found: YES 1.10.4
00:23:38.440 Has header "pcap.h" with dependency libpcap: YES
00:23:38.440 Compiler for C supports arguments -Wcast-qual: YES
00:23:38.440 Compiler for C supports arguments -Wdeprecated: YES
00:23:38.440 Compiler for C supports arguments -Wformat: YES
00:23:38.440 Compiler for C supports arguments -Wformat-nonliteral: NO
00:23:38.440 Compiler for C supports arguments -Wformat-security: NO
00:23:38.440 Compiler for C supports arguments -Wmissing-declarations: YES
00:23:38.440 Compiler for C supports arguments -Wmissing-prototypes: YES
00:23:38.440 Compiler for C supports arguments -Wnested-externs: YES
00:23:38.440 Compiler for C supports arguments -Wold-style-definition: YES
00:23:38.440 Compiler for C supports arguments -Wpointer-arith: YES
00:23:38.440 Compiler for C supports arguments -Wsign-compare: YES
00:23:38.440 Compiler for C supports arguments -Wstrict-prototypes: YES
00:23:38.440 Compiler for C supports arguments -Wundef: YES
00:23:38.440 Compiler for C supports arguments -Wwrite-strings: YES
00:23:38.440 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:23:38.440 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:23:38.440 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:23:38.441 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:23:38.441 Program objdump found: YES (/usr/bin/objdump)
00:23:38.441 Compiler for C supports arguments -mavx512f: YES
00:23:38.441 Checking if "AVX512 checking" compiles: YES
00:23:38.441 Fetching value of define "__SSE4_2__" : 1
00:23:38.441 Fetching value of define "__AES__" : 1
00:23:38.441 Fetching value of define "__AVX__" : 1
00:23:38.441 Fetching value of define "__AVX2__" : 1
00:23:38.441 Fetching value of define "__AVX512BW__" : 1
00:23:38.441 Fetching value of define "__AVX512CD__" : 1
00:23:38.441 Fetching value of define "__AVX512DQ__" : 1
00:23:38.441 Fetching value of define "__AVX512F__" : 1
00:23:38.441 Fetching value of define "__AVX512VL__" : 1
00:23:38.441 Fetching value of define "__PCLMUL__" : 1
00:23:38.441 Fetching value of define "__RDRND__" : 1
00:23:38.441 Fetching value of define "__RDSEED__" : 1
00:23:38.441 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:23:38.441 Fetching value of define "__znver1__" : (undefined)
00:23:38.441 Fetching value of define "__znver2__" : (undefined)
00:23:38.441 Fetching value of define "__znver3__" : (undefined)
00:23:38.441 Fetching value of define "__znver4__" : (undefined)
00:23:38.441 Library asan found: YES
00:23:38.441 Compiler for C supports arguments -Wno-format-truncation: YES
00:23:38.441 Message: lib/log: Defining dependency "log"
00:23:38.441 Message: lib/kvargs: Defining dependency "kvargs"
00:23:38.441 Message: lib/telemetry: Defining dependency "telemetry"
00:23:38.441 Library rt found: YES
00:23:38.441 Checking for function "getentropy" : NO
00:23:38.441 Message: lib/eal: Defining dependency "eal"
00:23:38.441 Message: lib/ring: Defining dependency "ring"
00:23:38.441 Message: lib/rcu: Defining dependency "rcu"
00:23:38.441 Message: lib/mempool: Defining dependency "mempool"
00:23:38.441 Message: lib/mbuf: Defining dependency "mbuf"
00:23:38.441 Fetching value of define "__PCLMUL__" : 1 (cached)
00:23:38.441 Fetching value of define "__AVX512F__" : 1 (cached)
00:23:38.441 Fetching value of define "__AVX512BW__" : 1 (cached)
00:23:38.441 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:23:38.441 Fetching value of define "__AVX512VL__" : 1 (cached)
00:23:38.441 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:23:38.441 Compiler for C supports arguments -mpclmul: YES
00:23:38.441 Compiler for C supports arguments -maes: YES
00:23:38.441 Compiler for C supports arguments -mavx512f: YES (cached)
00:23:38.441 Compiler for C supports arguments -mavx512bw: YES
00:23:38.441 Compiler for C supports arguments -mavx512dq: YES
00:23:38.441 Compiler for C supports arguments -mavx512vl: YES
00:23:38.441 Compiler for C supports arguments -mvpclmulqdq: YES
00:23:38.441 Compiler for C supports arguments -mavx2: YES
00:23:38.441 Compiler for C supports arguments -mavx: YES
00:23:38.441 Message: lib/net: Defining dependency "net"
00:23:38.441 Message: lib/meter: Defining dependency "meter"
00:23:38.441 Message: lib/ethdev: Defining dependency "ethdev"
00:23:38.441 Message: lib/pci: Defining dependency "pci"
00:23:38.441 Message: lib/cmdline: Defining dependency "cmdline"
00:23:38.441 Message: lib/hash: Defining dependency "hash"
00:23:38.441 Message: lib/timer: Defining dependency "timer"
00:23:38.441 Message: lib/compressdev: Defining dependency "compressdev"
00:23:38.441 Message: lib/cryptodev: Defining dependency "cryptodev"
00:23:38.441 Message: lib/dmadev: Defining dependency "dmadev"
00:23:38.441 Compiler for C supports arguments -Wno-cast-qual: YES
00:23:38.441 Message: lib/power: Defining dependency "power"
00:23:38.441 Message: lib/reorder: Defining dependency "reorder"
00:23:38.441 Message: lib/security: Defining dependency "security"
00:23:38.441 Has header "linux/userfaultfd.h" : YES
00:23:38.441 Has header "linux/vduse.h" : YES
00:23:38.441 Message: lib/vhost: Defining dependency "vhost"
00:23:38.441 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:23:38.441 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:23:38.441 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:23:38.441 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:23:38.441 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:23:38.441 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:23:38.441 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:23:38.441 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:23:38.441 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:23:38.441 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:23:38.441 Program doxygen found: YES (/usr/local/bin/doxygen)
00:23:38.441 Configuring doxy-api-html.conf using configuration
00:23:38.441 Configuring doxy-api-man.conf using configuration
00:23:38.441 Program mandb found: YES (/usr/bin/mandb)
00:23:38.441 Program sphinx-build found: NO
00:23:38.441 Configuring rte_build_config.h using configuration
00:23:38.441 Message:
00:23:38.441 =================
00:23:38.441 Applications Enabled
00:23:38.441 =================
00:23:38.441
00:23:38.441 apps:
00:23:38.441
00:23:38.441
00:23:38.441 Message:
00:23:38.441 =================
00:23:38.441 Libraries Enabled
00:23:38.441 =================
00:23:38.441
00:23:38.441 libs:
00:23:38.441 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:23:38.441 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:23:38.441 cryptodev, dmadev, power, reorder, security, vhost,
00:23:38.441
00:23:38.441 Message:
00:23:38.441 ===============
00:23:38.441 Drivers Enabled
00:23:38.441 ===============
00:23:38.441
00:23:38.441 common:
00:23:38.441
00:23:38.441 bus:
00:23:38.441 pci, vdev,
00:23:38.441 mempool:
00:23:38.441 ring,
00:23:38.441 dma:
00:23:38.441
00:23:38.441 net:
00:23:38.441
00:23:38.441 crypto:
00:23:38.441
00:23:38.441 compress:
00:23:38.441
00:23:38.441 vdpa:
00:23:38.441
00:23:38.441
00:23:38.441 Message:
00:23:38.441 =================
00:23:38.441 Content Skipped
00:23:38.441 =================
00:23:38.441
00:23:38.441 apps:
00:23:38.441 dumpcap: explicitly disabled via build config
00:23:38.441 graph: explicitly disabled via build config
00:23:38.441 pdump: explicitly disabled via build config
00:23:38.441 proc-info: explicitly disabled via build config
00:23:38.441 test-acl: explicitly disabled via build config
00:23:38.441 test-bbdev: explicitly disabled via build config
00:23:38.441 test-cmdline: explicitly disabled via build config
00:23:38.441 test-compress-perf: explicitly disabled via build config
00:23:38.441 test-crypto-perf: explicitly disabled via build config
00:23:38.441 test-dma-perf: explicitly disabled via build config
00:23:38.441 test-eventdev: explicitly disabled via build config
00:23:38.441 test-fib: explicitly disabled via build config
00:23:38.441 test-flow-perf: explicitly disabled via build config
00:23:38.441 test-gpudev: explicitly disabled via build config
00:23:38.441 test-mldev: explicitly disabled via build config
00:23:38.441 test-pipeline: explicitly disabled via build config
00:23:38.441 test-pmd: explicitly disabled via build config
00:23:38.441 test-regex: explicitly disabled via build config
00:23:38.441 test-sad: explicitly disabled via build config
00:23:38.441 test-security-perf: explicitly disabled via build config
00:23:38.441
00:23:38.441 libs:
00:23:38.441 argparse: explicitly disabled via build config
00:23:38.441 metrics: explicitly disabled via build config
00:23:38.441 acl: explicitly disabled via build config
00:23:38.441 bbdev: explicitly disabled via build config
00:23:38.441 bitratestats: explicitly disabled via build config
00:23:38.441 bpf: explicitly disabled via build config
00:23:38.441 cfgfile: explicitly disabled via build config
00:23:38.441 distributor: explicitly disabled via build config
00:23:38.441 efd: explicitly disabled via build config
00:23:38.441 eventdev: explicitly disabled via build config
00:23:38.441 dispatcher: explicitly disabled via build config
00:23:38.441 gpudev: explicitly disabled via build config
00:23:38.441 gro: explicitly disabled via build config
00:23:38.441 gso: explicitly disabled via build config
00:23:38.441 ip_frag: explicitly disabled via build config
00:23:38.441 jobstats: explicitly disabled via build config
00:23:38.441 latencystats: explicitly disabled via build config
00:23:38.441 lpm: explicitly disabled via build config
00:23:38.441 member: explicitly disabled via build config
00:23:38.441 pcapng: explicitly disabled via build config
00:23:38.441 rawdev: explicitly disabled via build config
regexdev: explicitly disabled via build config 00:23:38.441 mldev: explicitly disabled via build config 00:23:38.441 rib: explicitly disabled via build config 00:23:38.441 sched: explicitly disabled via build config 00:23:38.441 stack: explicitly disabled via build config 00:23:38.441 ipsec: explicitly disabled via build config 00:23:38.441 pdcp: explicitly disabled via build config 00:23:38.441 fib: explicitly disabled via build config 00:23:38.441 port: explicitly disabled via build config 00:23:38.441 pdump: explicitly disabled via build config 00:23:38.441 table: explicitly disabled via build config 00:23:38.441 pipeline: explicitly disabled via build config 00:23:38.441 graph: explicitly disabled via build config 00:23:38.441 node: explicitly disabled via build config 00:23:38.441 00:23:38.441 drivers: 00:23:38.441 common/cpt: not in enabled drivers build config 00:23:38.441 common/dpaax: not in enabled drivers build config 00:23:38.441 common/iavf: not in enabled drivers build config 00:23:38.441 common/idpf: not in enabled drivers build config 00:23:38.441 common/ionic: not in enabled drivers build config 00:23:38.441 common/mvep: not in enabled drivers build config 00:23:38.441 common/octeontx: not in enabled drivers build config 00:23:38.441 bus/auxiliary: not in enabled drivers build config 00:23:38.441 bus/cdx: not in enabled drivers build config 00:23:38.441 bus/dpaa: not in enabled drivers build config 00:23:38.441 bus/fslmc: not in enabled drivers build config 00:23:38.441 bus/ifpga: not in enabled drivers build config 00:23:38.441 bus/platform: not in enabled drivers build config 00:23:38.441 bus/uacce: not in enabled drivers build config 00:23:38.441 bus/vmbus: not in enabled drivers build config 00:23:38.442 common/cnxk: not in enabled drivers build config 00:23:38.442 common/mlx5: not in enabled drivers build config 00:23:38.442 common/nfp: not in enabled drivers build config 00:23:38.442 common/nitrox: not in enabled drivers build config 00:23:38.442 common/qat: not in enabled drivers build config 00:23:38.442 common/sfc_efx: not in enabled drivers build config 00:23:38.442 mempool/bucket: not in enabled drivers build config 00:23:38.442 mempool/cnxk: not in enabled drivers build config 00:23:38.442 mempool/dpaa: not in enabled drivers build config 00:23:38.442 mempool/dpaa2: not in enabled drivers build config 00:23:38.442 mempool/octeontx: not in enabled drivers build config 00:23:38.442 mempool/stack: not in enabled drivers build config 00:23:38.442 dma/cnxk: not in enabled drivers build config 00:23:38.442 dma/dpaa: not in enabled drivers build config 00:23:38.442 dma/dpaa2: not in enabled drivers build config 00:23:38.442 dma/hisilicon: not in enabled drivers build config 00:23:38.442 dma/idxd: not in enabled drivers build config 00:23:38.442 dma/ioat: not in enabled drivers build config 00:23:38.442 dma/skeleton: not in enabled drivers build config 00:23:38.442 net/af_packet: not in enabled drivers build config 00:23:38.442 net/af_xdp: not in enabled drivers build config 00:23:38.442 net/ark: not in enabled drivers build config 00:23:38.442 net/atlantic: not in enabled drivers build config 00:23:38.442 net/avp: not in enabled drivers build config 00:23:38.442 net/axgbe: not in enabled drivers build config 00:23:38.442 net/bnx2x: not in enabled drivers build config 00:23:38.442 net/bnxt: not in enabled drivers build config 00:23:38.442 net/bonding: not in enabled drivers build config 00:23:38.442 net/cnxk: not in enabled drivers build config 00:23:38.442 net/cpfl: 
not in enabled drivers build config 00:23:38.442 net/cxgbe: not in enabled drivers build config 00:23:38.442 net/dpaa: not in enabled drivers build config 00:23:38.442 net/dpaa2: not in enabled drivers build config 00:23:38.442 net/e1000: not in enabled drivers build config 00:23:38.442 net/ena: not in enabled drivers build config 00:23:38.442 net/enetc: not in enabled drivers build config 00:23:38.442 net/enetfec: not in enabled drivers build config 00:23:38.442 net/enic: not in enabled drivers build config 00:23:38.442 net/failsafe: not in enabled drivers build config 00:23:38.442 net/fm10k: not in enabled drivers build config 00:23:38.442 net/gve: not in enabled drivers build config 00:23:38.442 net/hinic: not in enabled drivers build config 00:23:38.442 net/hns3: not in enabled drivers build config 00:23:38.442 net/i40e: not in enabled drivers build config 00:23:38.442 net/iavf: not in enabled drivers build config 00:23:38.442 net/ice: not in enabled drivers build config 00:23:38.442 net/idpf: not in enabled drivers build config 00:23:38.442 net/igc: not in enabled drivers build config 00:23:38.442 net/ionic: not in enabled drivers build config 00:23:38.442 net/ipn3ke: not in enabled drivers build config 00:23:38.442 net/ixgbe: not in enabled drivers build config 00:23:38.442 net/mana: not in enabled drivers build config 00:23:38.442 net/memif: not in enabled drivers build config 00:23:38.442 net/mlx4: not in enabled drivers build config 00:23:38.442 net/mlx5: not in enabled drivers build config 00:23:38.442 net/mvneta: not in enabled drivers build config 00:23:38.442 net/mvpp2: not in enabled drivers build config 00:23:38.442 net/netvsc: not in enabled drivers build config 00:23:38.442 net/nfb: not in enabled drivers build config 00:23:38.442 net/nfp: not in enabled drivers build config 00:23:38.442 net/ngbe: not in enabled drivers build config 00:23:38.442 net/null: not in enabled drivers build config 00:23:38.442 net/octeontx: not in enabled drivers build config 00:23:38.442 net/octeon_ep: not in enabled drivers build config 00:23:38.442 net/pcap: not in enabled drivers build config 00:23:38.442 net/pfe: not in enabled drivers build config 00:23:38.442 net/qede: not in enabled drivers build config 00:23:38.442 net/ring: not in enabled drivers build config 00:23:38.442 net/sfc: not in enabled drivers build config 00:23:38.442 net/softnic: not in enabled drivers build config 00:23:38.442 net/tap: not in enabled drivers build config 00:23:38.442 net/thunderx: not in enabled drivers build config 00:23:38.442 net/txgbe: not in enabled drivers build config 00:23:38.442 net/vdev_netvsc: not in enabled drivers build config 00:23:38.442 net/vhost: not in enabled drivers build config 00:23:38.442 net/virtio: not in enabled drivers build config 00:23:38.442 net/vmxnet3: not in enabled drivers build config 00:23:38.442 raw/*: missing internal dependency, "rawdev" 00:23:38.442 crypto/armv8: not in enabled drivers build config 00:23:38.442 crypto/bcmfs: not in enabled drivers build config 00:23:38.442 crypto/caam_jr: not in enabled drivers build config 00:23:38.442 crypto/ccp: not in enabled drivers build config 00:23:38.442 crypto/cnxk: not in enabled drivers build config 00:23:38.442 crypto/dpaa_sec: not in enabled drivers build config 00:23:38.442 crypto/dpaa2_sec: not in enabled drivers build config 00:23:38.442 crypto/ipsec_mb: not in enabled drivers build config 00:23:38.442 crypto/mlx5: not in enabled drivers build config 00:23:38.442 crypto/mvsam: not in enabled drivers build config 
00:23:38.442 crypto/nitrox: not in enabled drivers build config 00:23:38.442 crypto/null: not in enabled drivers build config 00:23:38.442 crypto/octeontx: not in enabled drivers build config 00:23:38.442 crypto/openssl: not in enabled drivers build config 00:23:38.442 crypto/scheduler: not in enabled drivers build config 00:23:38.442 crypto/uadk: not in enabled drivers build config 00:23:38.442 crypto/virtio: not in enabled drivers build config 00:23:38.442 compress/isal: not in enabled drivers build config 00:23:38.442 compress/mlx5: not in enabled drivers build config 00:23:38.442 compress/nitrox: not in enabled drivers build config 00:23:38.442 compress/octeontx: not in enabled drivers build config 00:23:38.442 compress/zlib: not in enabled drivers build config 00:23:38.442 regex/*: missing internal dependency, "regexdev" 00:23:38.442 ml/*: missing internal dependency, "mldev" 00:23:38.442 vdpa/ifc: not in enabled drivers build config 00:23:38.442 vdpa/mlx5: not in enabled drivers build config 00:23:38.442 vdpa/nfp: not in enabled drivers build config 00:23:38.442 vdpa/sfc: not in enabled drivers build config 00:23:38.442 event/*: missing internal dependency, "eventdev" 00:23:38.442 baseband/*: missing internal dependency, "bbdev" 00:23:38.442 gpu/*: missing internal dependency, "gpudev" 00:23:38.442 00:23:38.442 00:23:38.442 Build targets in project: 85 00:23:38.442 00:23:38.442 DPDK 24.03.0 00:23:38.442 00:23:38.442 User defined options 00:23:38.442 buildtype : debug 00:23:38.442 default_library : shared 00:23:38.442 libdir : lib 00:23:38.442 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:23:38.442 b_sanitize : address 00:23:38.442 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:23:38.442 c_link_args : 00:23:38.442 cpu_instruction_set: native 00:23:38.442 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:23:38.442 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:23:38.442 enable_docs : false 00:23:38.442 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:23:38.442 enable_kmods : false 00:23:38.442 max_lcores : 128 00:23:38.442 tests : false 00:23:38.442 00:23:38.442 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:23:38.442 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:23:38.442 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:23:38.442 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:23:38.442 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:23:38.442 [4/268] Linking static target lib/librte_kvargs.a 00:23:38.442 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:23:38.442 [6/268] Linking static target lib/librte_log.a 00:23:39.008 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:23:39.008 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:23:39.008 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 
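For reference, the "User defined options" summary printed above maps onto a meson configure step roughly like the sketch below. This is an illustration assembled from the printed values, not the exact command the build scripts ran; the two list variables simply hold the full disable_apps/disable_libs strings shown in the summary.

    # run from the DPDK source directory
    disable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
    disable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table

    # configure and build DPDK with the options listed in the summary above
    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
        -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps="$disable_apps" -Ddisable_libs="$disable_libs" \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
        -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10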
00:23:39.008 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:23:39.008 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:23:39.008 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:23:39.008 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:23:39.008 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:23:39.267 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:23:39.267 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:23:39.267 [17/268] Linking static target lib/librte_telemetry.a 00:23:39.267 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:23:39.525 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:23:39.786 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:23:39.786 [21/268] Linking target lib/librte_log.so.24.1 00:23:39.786 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:23:39.786 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:23:39.786 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:23:39.786 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:23:40.045 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:23:40.045 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:23:40.045 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:23:40.045 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:23:40.045 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:23:40.045 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:23:40.305 [32/268] Linking target lib/librte_kvargs.so.24.1 00:23:40.305 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:23:40.305 [34/268] Linking target lib/librte_telemetry.so.24.1 00:23:40.565 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:23:40.565 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:23:40.565 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:23:40.565 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:23:40.565 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:23:40.565 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:23:40.565 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:23:40.565 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:23:40.824 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:23:40.824 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:23:40.824 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:23:41.084 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:23:41.084 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:23:41.084 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:23:41.084 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:23:41.342 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:23:41.342 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:23:41.342 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:23:41.664 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:23:41.664 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:23:41.664 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:23:41.664 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:23:41.664 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:23:41.923 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:23:41.923 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:23:41.923 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:23:41.923 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:23:41.923 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:23:41.923 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:23:42.179 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:23:42.179 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:23:42.179 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:23:42.437 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:23:42.437 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:23:42.437 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:23:42.695 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:23:42.695 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:23:42.695 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:23:42.695 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:23:42.695 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:23:42.952 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:23:42.952 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:23:43.210 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:23:43.210 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:23:43.210 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:23:43.210 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:23:43.210 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:23:43.210 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:23:43.468 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:23:43.468 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:23:43.726 [85/268] Linking static target lib/librte_eal.a 00:23:43.726 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:23:43.726 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:23:43.726 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:23:43.726 [89/268] Linking static target 
lib/librte_ring.a 00:23:43.726 [90/268] Linking static target lib/librte_rcu.a 00:23:43.984 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:23:43.984 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:23:43.984 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:23:43.984 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:23:43.984 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:23:43.984 [96/268] Linking static target lib/librte_mempool.a 00:23:44.551 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:23:44.551 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:23:44.551 [99/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:23:44.551 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:23:44.551 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:23:44.551 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:23:44.809 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:23:44.809 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:23:44.809 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:23:44.809 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:23:44.809 [107/268] Linking static target lib/librte_meter.a 00:23:45.066 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:23:45.066 [109/268] Linking static target lib/librte_net.a 00:23:45.066 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:23:45.066 [111/268] Linking static target lib/librte_mbuf.a 00:23:45.325 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:23:45.325 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:23:45.325 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:23:45.584 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:23:45.584 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:23:45.584 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:23:45.584 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:23:46.153 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:23:46.153 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:23:46.153 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:23:46.153 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:23:46.412 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:23:46.670 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:23:46.670 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:23:46.670 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:23:46.670 [127/268] Linking static target lib/librte_pci.a 00:23:46.670 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:23:46.929 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:23:46.929 [130/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:23:46.929 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:23:46.929 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:23:46.929 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:23:47.187 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:23:47.187 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:23:47.187 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:23:47.187 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:23:47.187 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:23:47.187 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:23:47.187 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:23:47.187 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:23:47.187 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:23:47.445 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:23:47.445 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:23:47.445 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:23:47.704 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:23:47.704 [147/268] Linking static target lib/librte_cmdline.a 00:23:47.704 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:23:47.962 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:23:48.222 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:23:48.222 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:23:48.222 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:23:48.222 [153/268] Linking static target lib/librte_ethdev.a 00:23:48.222 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:23:48.480 [155/268] Linking static target lib/librte_timer.a 00:23:48.481 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:23:48.481 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:23:48.739 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:23:48.739 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:23:48.739 [160/268] Linking static target lib/librte_compressdev.a 00:23:48.739 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:23:48.997 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:23:48.997 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:23:49.256 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:23:49.256 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:23:49.256 [166/268] Linking static target lib/librte_hash.a 00:23:49.256 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:23:49.256 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:23:49.256 [169/268] Linking static target lib/librte_dmadev.a 00:23:49.515 [170/268] Generating lib/cmdline.sym_chk with a 
custom command (wrapped by meson to capture output) 00:23:49.515 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:23:49.515 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:23:49.775 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:23:49.775 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:23:50.058 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:50.058 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:23:50.058 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:23:50.058 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:23:50.317 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:23:50.317 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:23:50.317 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:50.575 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:23:50.575 [183/268] Linking static target lib/librte_cryptodev.a 00:23:50.575 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:23:50.575 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:23:50.575 [186/268] Linking static target lib/librte_power.a 00:23:50.834 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:23:50.834 [188/268] Linking static target lib/librte_reorder.a 00:23:50.834 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:23:50.834 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:23:51.091 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:23:51.091 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:23:51.091 [193/268] Linking static target lib/librte_security.a 00:23:51.656 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:23:51.656 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:23:52.224 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:23:52.224 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:23:52.224 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:23:52.224 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:23:52.224 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:23:52.793 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:23:52.793 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:23:52.793 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:23:52.793 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:23:52.793 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:23:53.050 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:23:53.307 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:23:53.307 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:23:53.307 [209/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:23:53.307 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:23:53.565 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:53.565 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:23:53.565 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:53.565 [214/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:23:53.565 [215/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:23:53.565 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:53.824 [217/268] Linking static target drivers/librte_bus_vdev.a 00:23:53.824 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:23:53.824 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:53.824 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:53.824 [221/268] Linking static target drivers/librte_bus_pci.a 00:23:54.082 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:23:54.082 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:23:54.082 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:23:54.082 [225/268] Linking static target drivers/librte_mempool_ring.a 00:23:54.082 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:54.649 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:23:55.599 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:23:56.978 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:23:57.237 [230/268] Linking target lib/librte_eal.so.24.1 00:23:57.237 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:23:57.237 [232/268] Linking target lib/librte_meter.so.24.1 00:23:57.237 [233/268] Linking target lib/librte_ring.so.24.1 00:23:57.237 [234/268] Linking target lib/librte_pci.so.24.1 00:23:57.237 [235/268] Linking target lib/librte_timer.so.24.1 00:23:57.237 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:23:57.237 [237/268] Linking target lib/librte_dmadev.so.24.1 00:23:57.495 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:23:57.495 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:23:57.495 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:23:57.495 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:23:57.495 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:23:57.495 [243/268] Linking target lib/librte_rcu.so.24.1 00:23:57.495 [244/268] Linking target lib/librte_mempool.so.24.1 00:23:57.495 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:23:57.752 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:23:57.752 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:23:57.752 [248/268] Linking target lib/librte_mbuf.so.24.1 
00:23:57.752 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:23:58.011 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:58.011 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:23:58.011 [252/268] Linking target lib/librte_net.so.24.1 00:23:58.011 [253/268] Linking target lib/librte_reorder.so.24.1 00:23:58.011 [254/268] Linking target lib/librte_compressdev.so.24.1 00:23:58.011 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:23:58.269 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:23:58.269 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:23:58.269 [258/268] Linking target lib/librte_cmdline.so.24.1 00:23:58.269 [259/268] Linking target lib/librte_hash.so.24.1 00:23:58.269 [260/268] Linking target lib/librte_security.so.24.1 00:23:58.269 [261/268] Linking target lib/librte_ethdev.so.24.1 00:23:58.528 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:23:58.528 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:23:58.528 [264/268] Linking target lib/librte_power.so.24.1 00:24:01.815 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:24:01.815 [266/268] Linking static target lib/librte_vhost.a 00:24:03.190 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:24:03.448 [268/268] Linking target lib/librte_vhost.so.24.1 00:24:03.449 INFO: autodetecting backend as ninja 00:24:03.449 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:24:29.991 CC lib/ut/ut.o 00:24:29.991 CC lib/ut_mock/mock.o 00:24:29.991 CC lib/log/log_flags.o 00:24:29.991 CC lib/log/log_deprecated.o 00:24:29.991 CC lib/log/log.o 00:24:29.991 LIB libspdk_ut_mock.a 00:24:29.991 LIB libspdk_log.a 00:24:29.991 LIB libspdk_ut.a 00:24:29.991 SO libspdk_ut_mock.so.6.0 00:24:29.991 SO libspdk_log.so.7.1 00:24:29.991 SO libspdk_ut.so.2.0 00:24:29.991 SYMLINK libspdk_ut_mock.so 00:24:29.991 SYMLINK libspdk_ut.so 00:24:29.991 SYMLINK libspdk_log.so 00:24:29.991 CC lib/ioat/ioat.o 00:24:29.991 CC lib/util/base64.o 00:24:29.991 CC lib/dma/dma.o 00:24:29.991 CC lib/util/bit_array.o 00:24:29.991 CC lib/util/crc16.o 00:24:29.991 CC lib/util/cpuset.o 00:24:29.991 CC lib/util/crc32.o 00:24:29.991 CC lib/util/crc32c.o 00:24:29.991 CXX lib/trace_parser/trace.o 00:24:29.991 CC lib/vfio_user/host/vfio_user_pci.o 00:24:29.991 CC lib/vfio_user/host/vfio_user.o 00:24:29.991 CC lib/util/crc32_ieee.o 00:24:29.991 CC lib/util/crc64.o 00:24:29.991 CC lib/util/dif.o 00:24:29.991 LIB libspdk_dma.a 00:24:29.991 SO libspdk_dma.so.5.0 00:24:29.991 CC lib/util/fd.o 00:24:29.991 CC lib/util/fd_group.o 00:24:29.991 SYMLINK libspdk_dma.so 00:24:29.991 CC lib/util/file.o 00:24:29.991 LIB libspdk_ioat.a 00:24:29.991 CC lib/util/hexlify.o 00:24:29.991 CC lib/util/iov.o 00:24:29.991 SO libspdk_ioat.so.7.0 00:24:29.991 CC lib/util/math.o 00:24:29.991 LIB libspdk_vfio_user.a 00:24:29.991 SYMLINK libspdk_ioat.so 00:24:29.991 CC lib/util/net.o 00:24:29.991 SO libspdk_vfio_user.so.5.0 00:24:29.991 CC lib/util/pipe.o 00:24:29.991 SYMLINK libspdk_vfio_user.so 00:24:29.992 CC lib/util/strerror_tls.o 00:24:29.992 CC lib/util/string.o 00:24:29.992 CC lib/util/uuid.o 00:24:29.992 CC lib/util/xor.o 00:24:29.992 CC lib/util/zipf.o 00:24:29.992 CC 
lib/util/md5.o 00:24:29.992 LIB libspdk_util.a 00:24:29.992 LIB libspdk_trace_parser.a 00:24:29.992 SO libspdk_trace_parser.so.6.0 00:24:29.992 SO libspdk_util.so.10.1 00:24:29.992 SYMLINK libspdk_trace_parser.so 00:24:29.992 SYMLINK libspdk_util.so 00:24:29.992 CC lib/vmd/vmd.o 00:24:29.992 CC lib/vmd/led.o 00:24:29.992 CC lib/idxd/idxd.o 00:24:29.992 CC lib/idxd/idxd_user.o 00:24:29.992 CC lib/idxd/idxd_kernel.o 00:24:29.992 CC lib/env_dpdk/env.o 00:24:29.992 CC lib/env_dpdk/memory.o 00:24:29.992 CC lib/conf/conf.o 00:24:29.992 CC lib/json/json_parse.o 00:24:29.992 CC lib/rdma_utils/rdma_utils.o 00:24:30.248 CC lib/env_dpdk/pci.o 00:24:30.248 CC lib/env_dpdk/init.o 00:24:30.248 LIB libspdk_conf.a 00:24:30.248 SO libspdk_conf.so.6.0 00:24:30.248 CC lib/json/json_util.o 00:24:30.248 SYMLINK libspdk_conf.so 00:24:30.248 CC lib/env_dpdk/threads.o 00:24:30.248 CC lib/json/json_write.o 00:24:30.504 LIB libspdk_rdma_utils.a 00:24:30.504 SO libspdk_rdma_utils.so.1.0 00:24:30.504 SYMLINK libspdk_rdma_utils.so 00:24:30.504 CC lib/env_dpdk/pci_ioat.o 00:24:30.504 CC lib/env_dpdk/pci_virtio.o 00:24:30.760 CC lib/env_dpdk/pci_vmd.o 00:24:30.760 LIB libspdk_json.a 00:24:30.760 LIB libspdk_idxd.a 00:24:30.760 CC lib/env_dpdk/pci_idxd.o 00:24:30.760 SO libspdk_json.so.6.0 00:24:30.760 CC lib/env_dpdk/pci_event.o 00:24:30.760 CC lib/env_dpdk/sigbus_handler.o 00:24:30.760 SO libspdk_idxd.so.12.1 00:24:30.760 CC lib/rdma_provider/common.o 00:24:31.016 CC lib/env_dpdk/pci_dpdk.o 00:24:31.016 SYMLINK libspdk_json.so 00:24:31.016 CC lib/env_dpdk/pci_dpdk_2207.o 00:24:31.016 CC lib/env_dpdk/pci_dpdk_2211.o 00:24:31.016 LIB libspdk_vmd.a 00:24:31.016 SO libspdk_vmd.so.6.0 00:24:31.016 SYMLINK libspdk_idxd.so 00:24:31.016 CC lib/rdma_provider/rdma_provider_verbs.o 00:24:31.016 SYMLINK libspdk_vmd.so 00:24:31.283 CC lib/jsonrpc/jsonrpc_server.o 00:24:31.283 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:24:31.283 CC lib/jsonrpc/jsonrpc_client.o 00:24:31.283 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:24:31.555 LIB libspdk_rdma_provider.a 00:24:31.555 SO libspdk_rdma_provider.so.7.0 00:24:31.555 SYMLINK libspdk_rdma_provider.so 00:24:31.555 LIB libspdk_jsonrpc.a 00:24:31.812 SO libspdk_jsonrpc.so.6.0 00:24:31.812 SYMLINK libspdk_jsonrpc.so 00:24:32.069 CC lib/rpc/rpc.o 00:24:32.326 LIB libspdk_env_dpdk.a 00:24:32.326 LIB libspdk_rpc.a 00:24:32.326 SO libspdk_env_dpdk.so.15.1 00:24:32.326 SO libspdk_rpc.so.6.0 00:24:32.585 SYMLINK libspdk_rpc.so 00:24:32.585 SYMLINK libspdk_env_dpdk.so 00:24:32.843 CC lib/keyring/keyring.o 00:24:32.844 CC lib/keyring/keyring_rpc.o 00:24:32.844 CC lib/trace/trace.o 00:24:32.844 CC lib/trace/trace_rpc.o 00:24:32.844 CC lib/trace/trace_flags.o 00:24:32.844 CC lib/notify/notify_rpc.o 00:24:32.844 CC lib/notify/notify.o 00:24:33.101 LIB libspdk_notify.a 00:24:33.101 SO libspdk_notify.so.6.0 00:24:33.101 LIB libspdk_keyring.a 00:24:33.101 SYMLINK libspdk_notify.so 00:24:33.101 SO libspdk_keyring.so.2.0 00:24:33.101 LIB libspdk_trace.a 00:24:33.359 SYMLINK libspdk_keyring.so 00:24:33.359 SO libspdk_trace.so.11.0 00:24:33.359 SYMLINK libspdk_trace.so 00:24:33.617 CC lib/thread/thread.o 00:24:33.617 CC lib/thread/iobuf.o 00:24:33.617 CC lib/sock/sock.o 00:24:33.617 CC lib/sock/sock_rpc.o 00:24:34.184 LIB libspdk_sock.a 00:24:34.443 SO libspdk_sock.so.10.0 00:24:34.443 SYMLINK libspdk_sock.so 00:24:34.704 CC lib/nvme/nvme_ctrlr_cmd.o 00:24:34.704 CC lib/nvme/nvme_ns_cmd.o 00:24:34.704 CC lib/nvme/nvme_fabric.o 00:24:34.704 CC lib/nvme/nvme_ctrlr.o 00:24:34.704 CC lib/nvme/nvme_pcie.o 00:24:34.704 CC 
lib/nvme/nvme_qpair.o 00:24:34.704 CC lib/nvme/nvme_ns.o 00:24:34.704 CC lib/nvme/nvme.o 00:24:34.704 CC lib/nvme/nvme_pcie_common.o 00:24:35.635 CC lib/nvme/nvme_quirks.o 00:24:35.635 LIB libspdk_thread.a 00:24:35.635 SO libspdk_thread.so.11.0 00:24:35.957 CC lib/nvme/nvme_transport.o 00:24:35.957 CC lib/nvme/nvme_discovery.o 00:24:35.957 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:24:35.957 SYMLINK libspdk_thread.so 00:24:35.957 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:24:35.957 CC lib/nvme/nvme_tcp.o 00:24:36.536 CC lib/nvme/nvme_opal.o 00:24:36.536 CC lib/nvme/nvme_io_msg.o 00:24:36.536 CC lib/nvme/nvme_poll_group.o 00:24:36.793 CC lib/nvme/nvme_zns.o 00:24:36.793 CC lib/accel/accel.o 00:24:36.793 CC lib/blob/blobstore.o 00:24:36.793 CC lib/blob/request.o 00:24:36.793 CC lib/init/json_config.o 00:24:37.050 CC lib/init/subsystem.o 00:24:37.308 CC lib/accel/accel_rpc.o 00:24:37.308 CC lib/accel/accel_sw.o 00:24:37.567 CC lib/init/subsystem_rpc.o 00:24:37.567 CC lib/blob/zeroes.o 00:24:37.567 CC lib/nvme/nvme_stubs.o 00:24:37.567 CC lib/virtio/virtio.o 00:24:37.567 CC lib/init/rpc.o 00:24:37.825 CC lib/virtio/virtio_vhost_user.o 00:24:37.825 CC lib/fsdev/fsdev.o 00:24:37.825 CC lib/virtio/virtio_vfio_user.o 00:24:37.825 CC lib/blob/blob_bs_dev.o 00:24:37.825 LIB libspdk_init.a 00:24:37.825 SO libspdk_init.so.6.0 00:24:38.084 CC lib/nvme/nvme_auth.o 00:24:38.084 SYMLINK libspdk_init.so 00:24:38.084 CC lib/nvme/nvme_cuse.o 00:24:38.084 CC lib/nvme/nvme_rdma.o 00:24:38.084 CC lib/fsdev/fsdev_io.o 00:24:38.084 CC lib/virtio/virtio_pci.o 00:24:38.342 CC lib/fsdev/fsdev_rpc.o 00:24:38.600 CC lib/event/app.o 00:24:38.600 CC lib/event/reactor.o 00:24:38.600 LIB libspdk_accel.a 00:24:38.600 SO libspdk_accel.so.16.0 00:24:38.600 CC lib/event/log_rpc.o 00:24:38.600 LIB libspdk_fsdev.a 00:24:38.600 LIB libspdk_virtio.a 00:24:38.600 SYMLINK libspdk_accel.so 00:24:38.858 CC lib/event/app_rpc.o 00:24:38.858 SO libspdk_fsdev.so.2.0 00:24:38.858 SO libspdk_virtio.so.7.0 00:24:38.858 SYMLINK libspdk_fsdev.so 00:24:38.858 SYMLINK libspdk_virtio.so 00:24:38.858 CC lib/event/scheduler_static.o 00:24:39.117 CC lib/bdev/bdev.o 00:24:39.117 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:24:39.117 CC lib/bdev/bdev_rpc.o 00:24:39.117 CC lib/bdev/bdev_zone.o 00:24:39.117 CC lib/bdev/part.o 00:24:39.117 LIB libspdk_event.a 00:24:39.117 CC lib/bdev/scsi_nvme.o 00:24:39.375 SO libspdk_event.so.14.0 00:24:39.375 SYMLINK libspdk_event.so 00:24:39.942 LIB libspdk_fuse_dispatcher.a 00:24:39.942 SO libspdk_fuse_dispatcher.so.1.0 00:24:39.942 LIB libspdk_nvme.a 00:24:40.200 SYMLINK libspdk_fuse_dispatcher.so 00:24:40.200 SO libspdk_nvme.so.15.0 00:24:40.769 SYMLINK libspdk_nvme.so 00:24:41.703 LIB libspdk_blob.a 00:24:41.962 SO libspdk_blob.so.12.0 00:24:41.962 SYMLINK libspdk_blob.so 00:24:42.219 CC lib/lvol/lvol.o 00:24:42.219 CC lib/blobfs/blobfs.o 00:24:42.219 CC lib/blobfs/tree.o 00:24:43.152 LIB libspdk_bdev.a 00:24:43.409 SO libspdk_bdev.so.17.0 00:24:43.409 SYMLINK libspdk_bdev.so 00:24:43.667 LIB libspdk_lvol.a 00:24:43.667 CC lib/ftl/ftl_core.o 00:24:43.667 CC lib/ftl/ftl_init.o 00:24:43.667 CC lib/ftl/ftl_layout.o 00:24:43.667 CC lib/ftl/ftl_debug.o 00:24:43.667 CC lib/nvmf/ctrlr.o 00:24:43.667 CC lib/nbd/nbd.o 00:24:43.667 CC lib/ublk/ublk.o 00:24:43.667 CC lib/scsi/dev.o 00:24:43.667 SO libspdk_lvol.so.11.0 00:24:43.925 LIB libspdk_blobfs.a 00:24:43.925 SYMLINK libspdk_lvol.so 00:24:43.925 CC lib/scsi/lun.o 00:24:43.925 SO libspdk_blobfs.so.11.0 00:24:43.925 SYMLINK libspdk_blobfs.so 00:24:43.925 CC lib/nbd/nbd_rpc.o 
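The LIB / SO / SYMLINK triplets in this stretch of the log show each SPDK component being archived as a static library, then packaged as a versioned shared object plus an unversioned development symlink. A minimal sketch of that convention, with the library name, version, and object files borrowed from the libspdk_log lines above (the real rules live in SPDK's build makefiles):

    # produce "SO libspdk_log.so.7.1" and then "SYMLINK libspdk_log.so"
    cc -shared -fPIC -Wl,-soname,libspdk_log.so.7.1 \
        -o libspdk_log.so.7.1 log.o log_flags.o log_deprecated.o
    ln -sf libspdk_log.so.7.1 libspdk_log.so

Because the soname embedded at link time is the versioned name, a consumer that links against the plain libspdk_log.so symlink still records the versioned dependency at runtime.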
00:24:43.925 CC lib/nvmf/ctrlr_discovery.o 00:24:44.182 CC lib/ftl/ftl_io.o 00:24:44.182 CC lib/nvmf/ctrlr_bdev.o 00:24:44.182 CC lib/ublk/ublk_rpc.o 00:24:44.439 CC lib/nvmf/subsystem.o 00:24:44.439 CC lib/scsi/port.o 00:24:44.439 LIB libspdk_nbd.a 00:24:44.439 SO libspdk_nbd.so.7.0 00:24:44.439 CC lib/nvmf/nvmf.o 00:24:44.439 CC lib/nvmf/nvmf_rpc.o 00:24:44.439 SYMLINK libspdk_nbd.so 00:24:44.696 CC lib/nvmf/transport.o 00:24:44.696 CC lib/ftl/ftl_sb.o 00:24:44.696 CC lib/scsi/scsi.o 00:24:44.696 LIB libspdk_ublk.a 00:24:44.954 CC lib/scsi/scsi_bdev.o 00:24:44.954 CC lib/ftl/ftl_l2p.o 00:24:44.954 SO libspdk_ublk.so.3.0 00:24:44.954 CC lib/nvmf/tcp.o 00:24:44.954 SYMLINK libspdk_ublk.so 00:24:44.954 CC lib/nvmf/stubs.o 00:24:45.212 CC lib/ftl/ftl_l2p_flat.o 00:24:45.212 CC lib/scsi/scsi_pr.o 00:24:45.470 CC lib/ftl/ftl_nv_cache.o 00:24:45.731 CC lib/nvmf/mdns_server.o 00:24:45.731 CC lib/nvmf/rdma.o 00:24:45.731 CC lib/scsi/scsi_rpc.o 00:24:45.731 CC lib/scsi/task.o 00:24:45.731 CC lib/nvmf/auth.o 00:24:45.988 CC lib/ftl/ftl_band.o 00:24:45.988 CC lib/ftl/ftl_band_ops.o 00:24:45.988 LIB libspdk_scsi.a 00:24:46.245 SO libspdk_scsi.so.9.0 00:24:46.245 CC lib/ftl/ftl_writer.o 00:24:46.246 SYMLINK libspdk_scsi.so 00:24:46.246 CC lib/ftl/ftl_rq.o 00:24:46.503 CC lib/ftl/ftl_reloc.o 00:24:46.503 CC lib/ftl/ftl_l2p_cache.o 00:24:46.503 CC lib/iscsi/conn.o 00:24:46.503 CC lib/iscsi/init_grp.o 00:24:46.760 CC lib/ftl/ftl_p2l.o 00:24:47.017 CC lib/iscsi/iscsi.o 00:24:47.017 CC lib/iscsi/param.o 00:24:47.018 CC lib/iscsi/portal_grp.o 00:24:47.275 CC lib/vhost/vhost.o 00:24:47.275 CC lib/vhost/vhost_rpc.o 00:24:47.275 CC lib/vhost/vhost_scsi.o 00:24:47.275 CC lib/ftl/ftl_p2l_log.o 00:24:47.275 CC lib/iscsi/tgt_node.o 00:24:47.533 CC lib/ftl/mngt/ftl_mngt.o 00:24:47.533 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:24:47.533 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:24:47.789 CC lib/ftl/mngt/ftl_mngt_startup.o 00:24:47.789 CC lib/ftl/mngt/ftl_mngt_md.o 00:24:47.789 CC lib/ftl/mngt/ftl_mngt_misc.o 00:24:47.789 CC lib/vhost/vhost_blk.o 00:24:48.045 CC lib/iscsi/iscsi_subsystem.o 00:24:48.045 CC lib/vhost/rte_vhost_user.o 00:24:48.045 CC lib/iscsi/iscsi_rpc.o 00:24:48.045 CC lib/iscsi/task.o 00:24:48.301 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:24:48.301 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:24:48.301 CC lib/ftl/mngt/ftl_mngt_band.o 00:24:48.557 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:24:48.557 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:24:48.557 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:24:48.557 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:24:48.557 CC lib/ftl/utils/ftl_conf.o 00:24:48.814 CC lib/ftl/utils/ftl_md.o 00:24:48.814 CC lib/ftl/utils/ftl_mempool.o 00:24:48.814 CC lib/ftl/utils/ftl_bitmap.o 00:24:48.814 CC lib/ftl/utils/ftl_property.o 00:24:49.070 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:24:49.070 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:24:49.070 LIB libspdk_iscsi.a 00:24:49.070 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:24:49.070 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:24:49.070 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:24:49.070 SO libspdk_iscsi.so.8.0 00:24:49.326 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:24:49.326 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:24:49.326 LIB libspdk_vhost.a 00:24:49.326 CC lib/ftl/upgrade/ftl_sb_v3.o 00:24:49.326 CC lib/ftl/upgrade/ftl_sb_v5.o 00:24:49.326 CC lib/ftl/nvc/ftl_nvc_dev.o 00:24:49.326 SO libspdk_vhost.so.8.0 00:24:49.326 SYMLINK libspdk_iscsi.so 00:24:49.326 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:24:49.584 LIB libspdk_nvmf.a 00:24:49.584 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:24:49.584 
CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:24:49.584 CC lib/ftl/base/ftl_base_dev.o 00:24:49.584 CC lib/ftl/base/ftl_base_bdev.o 00:24:49.584 SYMLINK libspdk_vhost.so 00:24:49.584 CC lib/ftl/ftl_trace.o 00:24:49.584 SO libspdk_nvmf.so.20.0 00:24:50.157 LIB libspdk_ftl.a 00:24:50.157 SYMLINK libspdk_nvmf.so 00:24:50.420 SO libspdk_ftl.so.9.0 00:24:50.678 SYMLINK libspdk_ftl.so 00:24:51.244 CC module/env_dpdk/env_dpdk_rpc.o 00:24:51.244 CC module/accel/dsa/accel_dsa.o 00:24:51.244 CC module/keyring/file/keyring.o 00:24:51.244 CC module/accel/iaa/accel_iaa.o 00:24:51.244 CC module/accel/error/accel_error.o 00:24:51.244 CC module/sock/posix/posix.o 00:24:51.244 CC module/scheduler/dynamic/scheduler_dynamic.o 00:24:51.244 CC module/blob/bdev/blob_bdev.o 00:24:51.244 CC module/accel/ioat/accel_ioat.o 00:24:51.244 CC module/fsdev/aio/fsdev_aio.o 00:24:51.244 LIB libspdk_env_dpdk_rpc.a 00:24:51.501 SO libspdk_env_dpdk_rpc.so.6.0 00:24:51.501 SYMLINK libspdk_env_dpdk_rpc.so 00:24:51.501 CC module/accel/iaa/accel_iaa_rpc.o 00:24:51.501 CC module/fsdev/aio/fsdev_aio_rpc.o 00:24:51.501 CC module/accel/error/accel_error_rpc.o 00:24:51.501 CC module/keyring/file/keyring_rpc.o 00:24:51.501 LIB libspdk_scheduler_dynamic.a 00:24:51.502 CC module/accel/ioat/accel_ioat_rpc.o 00:24:51.759 SO libspdk_scheduler_dynamic.so.4.0 00:24:51.759 LIB libspdk_accel_iaa.a 00:24:51.759 CC module/accel/dsa/accel_dsa_rpc.o 00:24:51.759 LIB libspdk_blob_bdev.a 00:24:51.759 SYMLINK libspdk_scheduler_dynamic.so 00:24:51.759 LIB libspdk_keyring_file.a 00:24:51.759 SO libspdk_accel_iaa.so.3.0 00:24:51.759 LIB libspdk_accel_error.a 00:24:51.759 CC module/fsdev/aio/linux_aio_mgr.o 00:24:51.759 LIB libspdk_accel_ioat.a 00:24:51.759 SO libspdk_blob_bdev.so.12.0 00:24:51.759 SO libspdk_keyring_file.so.2.0 00:24:51.759 SO libspdk_accel_error.so.2.0 00:24:52.017 SO libspdk_accel_ioat.so.6.0 00:24:52.017 SYMLINK libspdk_accel_iaa.so 00:24:52.017 SYMLINK libspdk_blob_bdev.so 00:24:52.017 SYMLINK libspdk_accel_error.so 00:24:52.017 SYMLINK libspdk_accel_ioat.so 00:24:52.017 SYMLINK libspdk_keyring_file.so 00:24:52.017 LIB libspdk_accel_dsa.a 00:24:52.017 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:24:52.017 SO libspdk_accel_dsa.so.5.0 00:24:52.274 CC module/scheduler/gscheduler/gscheduler.o 00:24:52.274 CC module/keyring/linux/keyring.o 00:24:52.274 SYMLINK libspdk_accel_dsa.so 00:24:52.274 LIB libspdk_scheduler_dpdk_governor.a 00:24:52.274 SO libspdk_scheduler_dpdk_governor.so.4.0 00:24:52.274 CC module/bdev/error/vbdev_error.o 00:24:52.274 CC module/bdev/gpt/gpt.o 00:24:52.274 CC module/bdev/delay/vbdev_delay.o 00:24:52.274 CC module/blobfs/bdev/blobfs_bdev.o 00:24:52.531 LIB libspdk_sock_posix.a 00:24:52.531 LIB libspdk_scheduler_gscheduler.a 00:24:52.531 CC module/keyring/linux/keyring_rpc.o 00:24:52.531 LIB libspdk_fsdev_aio.a 00:24:52.531 SYMLINK libspdk_scheduler_dpdk_governor.so 00:24:52.531 CC module/bdev/gpt/vbdev_gpt.o 00:24:52.531 SO libspdk_scheduler_gscheduler.so.4.0 00:24:52.531 SO libspdk_sock_posix.so.6.0 00:24:52.531 CC module/bdev/lvol/vbdev_lvol.o 00:24:52.531 SO libspdk_fsdev_aio.so.1.0 00:24:52.531 SYMLINK libspdk_scheduler_gscheduler.so 00:24:52.531 CC module/bdev/error/vbdev_error_rpc.o 00:24:52.531 SYMLINK libspdk_fsdev_aio.so 00:24:52.532 LIB libspdk_keyring_linux.a 00:24:52.532 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:24:52.532 SYMLINK libspdk_sock_posix.so 00:24:52.791 SO libspdk_keyring_linux.so.1.0 00:24:52.791 SYMLINK libspdk_keyring_linux.so 00:24:52.791 LIB libspdk_bdev_error.a 00:24:52.791 LIB 
libspdk_blobfs_bdev.a 00:24:52.791 LIB libspdk_bdev_gpt.a 00:24:52.791 SO libspdk_bdev_error.so.6.0 00:24:52.791 SO libspdk_bdev_gpt.so.6.0 00:24:52.791 SO libspdk_blobfs_bdev.so.6.0 00:24:52.791 CC module/bdev/malloc/bdev_malloc.o 00:24:52.791 CC module/bdev/delay/vbdev_delay_rpc.o 00:24:52.791 CC module/bdev/null/bdev_null.o 00:24:53.050 SYMLINK libspdk_bdev_error.so 00:24:53.050 CC module/bdev/nvme/bdev_nvme.o 00:24:53.050 SYMLINK libspdk_blobfs_bdev.so 00:24:53.050 CC module/bdev/nvme/bdev_nvme_rpc.o 00:24:53.050 SYMLINK libspdk_bdev_gpt.so 00:24:53.050 CC module/bdev/raid/bdev_raid.o 00:24:53.050 CC module/bdev/null/bdev_null_rpc.o 00:24:53.050 CC module/bdev/passthru/vbdev_passthru.o 00:24:53.050 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:24:53.050 LIB libspdk_bdev_delay.a 00:24:53.309 CC module/bdev/split/vbdev_split.o 00:24:53.309 SO libspdk_bdev_delay.so.6.0 00:24:53.309 CC module/bdev/split/vbdev_split_rpc.o 00:24:53.309 LIB libspdk_bdev_null.a 00:24:53.309 SYMLINK libspdk_bdev_delay.so 00:24:53.309 CC module/bdev/malloc/bdev_malloc_rpc.o 00:24:53.309 SO libspdk_bdev_null.so.6.0 00:24:53.309 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:24:53.569 SYMLINK libspdk_bdev_null.so 00:24:53.569 CC module/bdev/nvme/nvme_rpc.o 00:24:53.569 CC module/bdev/raid/bdev_raid_rpc.o 00:24:53.569 CC module/bdev/nvme/bdev_mdns_client.o 00:24:53.569 LIB libspdk_bdev_split.a 00:24:53.569 LIB libspdk_bdev_malloc.a 00:24:53.569 SO libspdk_bdev_split.so.6.0 00:24:53.569 SO libspdk_bdev_malloc.so.6.0 00:24:53.569 SYMLINK libspdk_bdev_split.so 00:24:53.569 SYMLINK libspdk_bdev_malloc.so 00:24:53.569 LIB libspdk_bdev_passthru.a 00:24:53.569 CC module/bdev/nvme/vbdev_opal.o 00:24:53.867 SO libspdk_bdev_passthru.so.6.0 00:24:53.867 LIB libspdk_bdev_lvol.a 00:24:53.867 SO libspdk_bdev_lvol.so.6.0 00:24:53.867 SYMLINK libspdk_bdev_passthru.so 00:24:53.867 CC module/bdev/raid/bdev_raid_sb.o 00:24:53.867 SYMLINK libspdk_bdev_lvol.so 00:24:53.867 CC module/bdev/zone_block/vbdev_zone_block.o 00:24:53.867 CC module/bdev/xnvme/bdev_xnvme.o 00:24:54.152 CC module/bdev/aio/bdev_aio.o 00:24:54.152 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:24:54.152 CC module/bdev/ftl/bdev_ftl.o 00:24:54.152 CC module/bdev/iscsi/bdev_iscsi.o 00:24:54.152 CC module/bdev/virtio/bdev_virtio_scsi.o 00:24:54.152 CC module/bdev/virtio/bdev_virtio_blk.o 00:24:54.152 CC module/bdev/virtio/bdev_virtio_rpc.o 00:24:54.152 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:24:54.410 LIB libspdk_bdev_zone_block.a 00:24:54.410 SO libspdk_bdev_zone_block.so.6.0 00:24:54.410 CC module/bdev/aio/bdev_aio_rpc.o 00:24:54.410 CC module/bdev/raid/raid0.o 00:24:54.410 CC module/bdev/ftl/bdev_ftl_rpc.o 00:24:54.410 LIB libspdk_bdev_xnvme.a 00:24:54.410 SYMLINK libspdk_bdev_zone_block.so 00:24:54.410 CC module/bdev/nvme/vbdev_opal_rpc.o 00:24:54.410 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:24:54.410 SO libspdk_bdev_xnvme.so.3.0 00:24:54.410 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:24:54.668 CC module/bdev/raid/raid1.o 00:24:54.668 LIB libspdk_bdev_aio.a 00:24:54.668 SYMLINK libspdk_bdev_xnvme.so 00:24:54.668 CC module/bdev/raid/concat.o 00:24:54.668 SO libspdk_bdev_aio.so.6.0 00:24:54.668 LIB libspdk_bdev_iscsi.a 00:24:54.668 SYMLINK libspdk_bdev_aio.so 00:24:54.668 LIB libspdk_bdev_ftl.a 00:24:54.668 SO libspdk_bdev_iscsi.so.6.0 00:24:54.927 SO libspdk_bdev_ftl.so.6.0 00:24:54.927 SYMLINK libspdk_bdev_iscsi.so 00:24:54.927 SYMLINK libspdk_bdev_ftl.so 00:24:54.927 LIB libspdk_bdev_virtio.a 00:24:54.927 LIB libspdk_bdev_raid.a 00:24:54.927 SO 
libspdk_bdev_virtio.so.6.0 00:24:54.927 SO libspdk_bdev_raid.so.6.0 00:24:55.186 SYMLINK libspdk_bdev_virtio.so 00:24:55.186 SYMLINK libspdk_bdev_raid.so 00:24:56.563 LIB libspdk_bdev_nvme.a 00:24:56.820 SO libspdk_bdev_nvme.so.7.1 00:24:56.820 SYMLINK libspdk_bdev_nvme.so 00:24:57.753 CC module/event/subsystems/scheduler/scheduler.o 00:24:57.753 CC module/event/subsystems/keyring/keyring.o 00:24:57.753 CC module/event/subsystems/sock/sock.o 00:24:57.753 CC module/event/subsystems/iobuf/iobuf.o 00:24:57.753 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:24:57.753 CC module/event/subsystems/fsdev/fsdev.o 00:24:57.753 CC module/event/subsystems/vmd/vmd.o 00:24:57.753 CC module/event/subsystems/vmd/vmd_rpc.o 00:24:57.753 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:24:57.753 LIB libspdk_event_scheduler.a 00:24:57.753 LIB libspdk_event_keyring.a 00:24:57.753 SO libspdk_event_scheduler.so.4.0 00:24:57.753 LIB libspdk_event_fsdev.a 00:24:57.753 LIB libspdk_event_vhost_blk.a 00:24:57.753 LIB libspdk_event_vmd.a 00:24:57.753 LIB libspdk_event_sock.a 00:24:57.753 SO libspdk_event_keyring.so.1.0 00:24:57.753 LIB libspdk_event_iobuf.a 00:24:57.753 SO libspdk_event_fsdev.so.1.0 00:24:57.753 SO libspdk_event_vhost_blk.so.3.0 00:24:57.753 SO libspdk_event_sock.so.5.0 00:24:57.753 SO libspdk_event_vmd.so.6.0 00:24:57.753 SYMLINK libspdk_event_scheduler.so 00:24:57.753 SO libspdk_event_iobuf.so.3.0 00:24:57.753 SYMLINK libspdk_event_keyring.so 00:24:57.753 SYMLINK libspdk_event_fsdev.so 00:24:57.753 SYMLINK libspdk_event_vhost_blk.so 00:24:58.011 SYMLINK libspdk_event_vmd.so 00:24:58.011 SYMLINK libspdk_event_sock.so 00:24:58.011 SYMLINK libspdk_event_iobuf.so 00:24:58.270 CC module/event/subsystems/accel/accel.o 00:24:58.529 LIB libspdk_event_accel.a 00:24:58.529 SO libspdk_event_accel.so.6.0 00:24:58.529 SYMLINK libspdk_event_accel.so 00:24:59.096 CC module/event/subsystems/bdev/bdev.o 00:24:59.096 LIB libspdk_event_bdev.a 00:24:59.096 SO libspdk_event_bdev.so.6.0 00:24:59.355 SYMLINK libspdk_event_bdev.so 00:24:59.613 CC module/event/subsystems/nbd/nbd.o 00:24:59.613 CC module/event/subsystems/scsi/scsi.o 00:24:59.613 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:24:59.613 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:24:59.613 CC module/event/subsystems/ublk/ublk.o 00:24:59.874 LIB libspdk_event_nbd.a 00:24:59.874 LIB libspdk_event_ublk.a 00:24:59.874 SO libspdk_event_nbd.so.6.0 00:24:59.874 SO libspdk_event_ublk.so.3.0 00:24:59.874 LIB libspdk_event_scsi.a 00:24:59.874 SO libspdk_event_scsi.so.6.0 00:24:59.874 SYMLINK libspdk_event_nbd.so 00:24:59.874 SYMLINK libspdk_event_ublk.so 00:24:59.874 LIB libspdk_event_nvmf.a 00:24:59.874 SYMLINK libspdk_event_scsi.so 00:25:00.133 SO libspdk_event_nvmf.so.6.0 00:25:00.133 SYMLINK libspdk_event_nvmf.so 00:25:00.391 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:25:00.391 CC module/event/subsystems/iscsi/iscsi.o 00:25:00.649 LIB libspdk_event_vhost_scsi.a 00:25:00.649 LIB libspdk_event_iscsi.a 00:25:00.649 SO libspdk_event_vhost_scsi.so.3.0 00:25:00.649 SO libspdk_event_iscsi.so.6.0 00:25:00.649 SYMLINK libspdk_event_vhost_scsi.so 00:25:00.649 SYMLINK libspdk_event_iscsi.so 00:25:00.907 SO libspdk.so.6.0 00:25:00.907 SYMLINK libspdk.so 00:25:01.165 CXX app/trace/trace.o 00:25:01.165 CC app/trace_record/trace_record.o 00:25:01.165 CC examples/interrupt_tgt/interrupt_tgt.o 00:25:01.165 CC app/nvmf_tgt/nvmf_main.o 00:25:01.424 CC app/iscsi_tgt/iscsi_tgt.o 00:25:01.424 CC app/spdk_tgt/spdk_tgt.o 00:25:01.424 CC examples/util/zipf/zipf.o 00:25:01.424 
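At this point the SPDK libraries themselves are finished (the sequence ends with SYMLINK libspdk.so) and the build moves on to the example apps and test binaries (CXX app/trace/trace.o and the LINK lines that follow). As a hedged reconstruction of the step driving this output, with flags inferred from what this run visibly builds (the address sanitizer, matching the b_sanitize : address option above, and the xnvme bdev module compiled earlier) rather than copied from the CI scripts:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-asan --with-xnvme   # inferred flags; the CI derives its own set
    make -j10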
CC test/thread/poller_perf/poller_perf.o 00:25:01.424 CC examples/ioat/perf/perf.o 00:25:01.424 CC test/dma/test_dma/test_dma.o 00:25:01.684 LINK nvmf_tgt 00:25:01.684 LINK interrupt_tgt 00:25:01.684 LINK zipf 00:25:01.684 LINK poller_perf 00:25:01.684 LINK iscsi_tgt 00:25:01.684 LINK spdk_tgt 00:25:01.684 LINK spdk_trace_record 00:25:01.684 LINK ioat_perf 00:25:01.684 LINK spdk_trace 00:25:01.944 CC examples/ioat/verify/verify.o 00:25:01.944 CC app/spdk_lspci/spdk_lspci.o 00:25:01.944 CC app/spdk_nvme_perf/perf.o 00:25:02.202 CC app/spdk_nvme_identify/identify.o 00:25:02.202 CC app/spdk_nvme_discover/discovery_aer.o 00:25:02.202 LINK test_dma 00:25:02.202 LINK spdk_lspci 00:25:02.202 CC examples/thread/thread/thread_ex.o 00:25:02.202 TEST_HEADER include/spdk/accel.h 00:25:02.202 TEST_HEADER include/spdk/accel_module.h 00:25:02.202 TEST_HEADER include/spdk/assert.h 00:25:02.202 TEST_HEADER include/spdk/barrier.h 00:25:02.202 TEST_HEADER include/spdk/base64.h 00:25:02.202 TEST_HEADER include/spdk/bdev.h 00:25:02.202 TEST_HEADER include/spdk/bdev_module.h 00:25:02.202 CC examples/sock/hello_world/hello_sock.o 00:25:02.202 TEST_HEADER include/spdk/bdev_zone.h 00:25:02.202 TEST_HEADER include/spdk/bit_array.h 00:25:02.202 TEST_HEADER include/spdk/bit_pool.h 00:25:02.202 TEST_HEADER include/spdk/blob_bdev.h 00:25:02.202 TEST_HEADER include/spdk/blobfs_bdev.h 00:25:02.202 TEST_HEADER include/spdk/blobfs.h 00:25:02.202 TEST_HEADER include/spdk/blob.h 00:25:02.202 TEST_HEADER include/spdk/conf.h 00:25:02.202 TEST_HEADER include/spdk/config.h 00:25:02.202 LINK verify 00:25:02.202 TEST_HEADER include/spdk/cpuset.h 00:25:02.202 TEST_HEADER include/spdk/crc16.h 00:25:02.202 TEST_HEADER include/spdk/crc32.h 00:25:02.202 TEST_HEADER include/spdk/crc64.h 00:25:02.202 TEST_HEADER include/spdk/dif.h 00:25:02.202 TEST_HEADER include/spdk/dma.h 00:25:02.202 TEST_HEADER include/spdk/endian.h 00:25:02.202 TEST_HEADER include/spdk/env_dpdk.h 00:25:02.202 TEST_HEADER include/spdk/env.h 00:25:02.202 TEST_HEADER include/spdk/event.h 00:25:02.202 TEST_HEADER include/spdk/fd_group.h 00:25:02.202 TEST_HEADER include/spdk/fd.h 00:25:02.202 TEST_HEADER include/spdk/file.h 00:25:02.202 CC test/app/bdev_svc/bdev_svc.o 00:25:02.202 TEST_HEADER include/spdk/fsdev.h 00:25:02.202 TEST_HEADER include/spdk/fsdev_module.h 00:25:02.202 TEST_HEADER include/spdk/ftl.h 00:25:02.202 TEST_HEADER include/spdk/fuse_dispatcher.h 00:25:02.202 TEST_HEADER include/spdk/gpt_spec.h 00:25:02.202 TEST_HEADER include/spdk/hexlify.h 00:25:02.202 TEST_HEADER include/spdk/histogram_data.h 00:25:02.202 TEST_HEADER include/spdk/idxd.h 00:25:02.462 TEST_HEADER include/spdk/idxd_spec.h 00:25:02.462 TEST_HEADER include/spdk/init.h 00:25:02.462 TEST_HEADER include/spdk/ioat.h 00:25:02.462 TEST_HEADER include/spdk/ioat_spec.h 00:25:02.462 TEST_HEADER include/spdk/iscsi_spec.h 00:25:02.462 TEST_HEADER include/spdk/json.h 00:25:02.462 TEST_HEADER include/spdk/jsonrpc.h 00:25:02.462 TEST_HEADER include/spdk/keyring.h 00:25:02.462 TEST_HEADER include/spdk/keyring_module.h 00:25:02.462 TEST_HEADER include/spdk/likely.h 00:25:02.462 TEST_HEADER include/spdk/log.h 00:25:02.462 TEST_HEADER include/spdk/lvol.h 00:25:02.462 TEST_HEADER include/spdk/md5.h 00:25:02.462 TEST_HEADER include/spdk/memory.h 00:25:02.462 TEST_HEADER include/spdk/mmio.h 00:25:02.462 TEST_HEADER include/spdk/nbd.h 00:25:02.462 TEST_HEADER include/spdk/net.h 00:25:02.462 TEST_HEADER include/spdk/notify.h 00:25:02.462 TEST_HEADER include/spdk/nvme.h 00:25:02.462 TEST_HEADER 
include/spdk/nvme_intel.h 00:25:02.462 TEST_HEADER include/spdk/nvme_ocssd.h 00:25:02.462 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:25:02.462 TEST_HEADER include/spdk/nvme_spec.h 00:25:02.462 TEST_HEADER include/spdk/nvme_zns.h 00:25:02.462 TEST_HEADER include/spdk/nvmf_cmd.h 00:25:02.462 LINK spdk_nvme_discover 00:25:02.462 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:25:02.462 TEST_HEADER include/spdk/nvmf.h 00:25:02.462 TEST_HEADER include/spdk/nvmf_spec.h 00:25:02.462 TEST_HEADER include/spdk/nvmf_transport.h 00:25:02.462 TEST_HEADER include/spdk/opal.h 00:25:02.462 TEST_HEADER include/spdk/opal_spec.h 00:25:02.462 TEST_HEADER include/spdk/pci_ids.h 00:25:02.462 TEST_HEADER include/spdk/pipe.h 00:25:02.462 TEST_HEADER include/spdk/queue.h 00:25:02.462 TEST_HEADER include/spdk/reduce.h 00:25:02.462 TEST_HEADER include/spdk/rpc.h 00:25:02.462 TEST_HEADER include/spdk/scheduler.h 00:25:02.462 TEST_HEADER include/spdk/scsi.h 00:25:02.462 TEST_HEADER include/spdk/scsi_spec.h 00:25:02.462 TEST_HEADER include/spdk/sock.h 00:25:02.462 TEST_HEADER include/spdk/stdinc.h 00:25:02.462 TEST_HEADER include/spdk/string.h 00:25:02.462 TEST_HEADER include/spdk/thread.h 00:25:02.462 TEST_HEADER include/spdk/trace.h 00:25:02.462 TEST_HEADER include/spdk/trace_parser.h 00:25:02.462 TEST_HEADER include/spdk/tree.h 00:25:02.462 TEST_HEADER include/spdk/ublk.h 00:25:02.462 TEST_HEADER include/spdk/util.h 00:25:02.462 TEST_HEADER include/spdk/uuid.h 00:25:02.462 TEST_HEADER include/spdk/version.h 00:25:02.462 TEST_HEADER include/spdk/vfio_user_pci.h 00:25:02.462 TEST_HEADER include/spdk/vfio_user_spec.h 00:25:02.462 TEST_HEADER include/spdk/vhost.h 00:25:02.462 TEST_HEADER include/spdk/vmd.h 00:25:02.462 TEST_HEADER include/spdk/xor.h 00:25:02.462 TEST_HEADER include/spdk/zipf.h 00:25:02.462 CXX test/cpp_headers/accel.o 00:25:02.462 CXX test/cpp_headers/accel_module.o 00:25:02.462 LINK bdev_svc 00:25:02.462 CC app/spdk_top/spdk_top.o 00:25:02.462 CXX test/cpp_headers/assert.o 00:25:02.720 LINK hello_sock 00:25:02.720 LINK thread 00:25:02.720 CXX test/cpp_headers/barrier.o 00:25:02.720 CXX test/cpp_headers/base64.o 00:25:02.981 CC examples/vmd/lsvmd/lsvmd.o 00:25:02.981 CC examples/idxd/perf/perf.o 00:25:02.981 CC test/app/histogram_perf/histogram_perf.o 00:25:02.981 CC examples/vmd/led/led.o 00:25:02.981 CXX test/cpp_headers/bdev.o 00:25:02.981 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:25:02.981 CC test/env/mem_callbacks/mem_callbacks.o 00:25:02.981 LINK lsvmd 00:25:03.240 LINK spdk_nvme_perf 00:25:03.240 LINK histogram_perf 00:25:03.240 LINK led 00:25:03.240 LINK spdk_nvme_identify 00:25:03.240 CXX test/cpp_headers/bdev_module.o 00:25:03.499 LINK idxd_perf 00:25:03.500 CXX test/cpp_headers/bdev_zone.o 00:25:03.759 CC test/app/jsoncat/jsoncat.o 00:25:03.759 LINK nvme_fuzz 00:25:03.759 CC examples/nvme/hello_world/hello_world.o 00:25:03.759 CC test/event/event_perf/event_perf.o 00:25:03.759 CC examples/fsdev/hello_world/hello_fsdev.o 00:25:03.759 CC test/nvme/aer/aer.o 00:25:03.759 LINK spdk_top 00:25:03.759 CC test/nvme/reset/reset.o 00:25:03.759 LINK mem_callbacks 00:25:03.759 CXX test/cpp_headers/bit_array.o 00:25:03.759 LINK jsoncat 00:25:04.018 LINK event_perf 00:25:04.018 LINK hello_world 00:25:04.018 CXX test/cpp_headers/bit_pool.o 00:25:04.018 LINK hello_fsdev 00:25:04.018 CC test/env/vtophys/vtophys.o 00:25:04.018 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:25:04.018 LINK reset 00:25:04.277 LINK aer 00:25:04.277 CC test/nvme/sgl/sgl.o 00:25:04.277 CC app/vhost/vhost.o 00:25:04.277 CC 
test/event/reactor/reactor.o 00:25:04.277 CXX test/cpp_headers/blob_bdev.o 00:25:04.277 LINK vtophys 00:25:04.537 CXX test/cpp_headers/blobfs_bdev.o 00:25:04.537 CC examples/nvme/reconnect/reconnect.o 00:25:04.537 LINK vhost 00:25:04.537 LINK reactor 00:25:04.537 CC test/nvme/e2edp/nvme_dp.o 00:25:04.537 CC test/app/stub/stub.o 00:25:04.537 LINK sgl 00:25:04.797 CC test/nvme/overhead/overhead.o 00:25:04.797 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:25:04.797 CXX test/cpp_headers/blobfs.o 00:25:04.797 LINK stub 00:25:04.797 CC test/event/reactor_perf/reactor_perf.o 00:25:04.797 LINK nvme_dp 00:25:04.797 CXX test/cpp_headers/blob.o 00:25:04.797 CC app/spdk_dd/spdk_dd.o 00:25:04.797 LINK env_dpdk_post_init 00:25:04.797 LINK reconnect 00:25:05.055 CC test/env/memory/memory_ut.o 00:25:05.055 LINK reactor_perf 00:25:05.055 LINK overhead 00:25:05.055 CXX test/cpp_headers/conf.o 00:25:05.055 CC test/env/pci/pci_ut.o 00:25:05.314 CC examples/nvme/nvme_manage/nvme_manage.o 00:25:05.314 CC test/nvme/err_injection/err_injection.o 00:25:05.314 CXX test/cpp_headers/config.o 00:25:05.314 CXX test/cpp_headers/cpuset.o 00:25:05.314 CC app/fio/nvme/fio_plugin.o 00:25:05.314 CC test/event/app_repeat/app_repeat.o 00:25:05.314 LINK spdk_dd 00:25:05.572 CC test/nvme/startup/startup.o 00:25:05.572 LINK err_injection 00:25:05.572 CXX test/cpp_headers/crc16.o 00:25:05.572 LINK app_repeat 00:25:05.572 LINK startup 00:25:05.832 LINK pci_ut 00:25:05.832 CXX test/cpp_headers/crc32.o 00:25:05.832 CXX test/cpp_headers/crc64.o 00:25:05.832 CXX test/cpp_headers/dif.o 00:25:06.090 LINK nvme_manage 00:25:06.090 CXX test/cpp_headers/dma.o 00:25:06.090 CC test/event/scheduler/scheduler.o 00:25:06.090 CC test/nvme/reserve/reserve.o 00:25:06.090 CC test/rpc_client/rpc_client_test.o 00:25:06.090 CC test/nvme/simple_copy/simple_copy.o 00:25:06.090 LINK spdk_nvme 00:25:06.090 CXX test/cpp_headers/endian.o 00:25:06.090 CC test/nvme/connect_stress/connect_stress.o 00:25:06.348 LINK rpc_client_test 00:25:06.348 CC examples/nvme/arbitration/arbitration.o 00:25:06.348 LINK scheduler 00:25:06.348 LINK reserve 00:25:06.348 CXX test/cpp_headers/env_dpdk.o 00:25:06.348 LINK simple_copy 00:25:06.348 CC app/fio/bdev/fio_plugin.o 00:25:06.348 LINK memory_ut 00:25:06.606 LINK connect_stress 00:25:06.606 CXX test/cpp_headers/env.o 00:25:06.606 CXX test/cpp_headers/event.o 00:25:06.606 LINK iscsi_fuzz 00:25:06.606 CXX test/cpp_headers/fd_group.o 00:25:06.606 CC examples/nvme/hotplug/hotplug.o 00:25:06.865 LINK arbitration 00:25:06.865 CXX test/cpp_headers/fd.o 00:25:06.865 CC test/nvme/boot_partition/boot_partition.o 00:25:06.865 CC test/accel/dif/dif.o 00:25:06.865 LINK hotplug 00:25:07.123 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:25:07.123 CC examples/accel/perf/accel_perf.o 00:25:07.123 CXX test/cpp_headers/file.o 00:25:07.123 CC test/blobfs/mkfs/mkfs.o 00:25:07.123 LINK spdk_bdev 00:25:07.123 CXX test/cpp_headers/fsdev.o 00:25:07.123 LINK boot_partition 00:25:07.123 CC test/lvol/esnap/esnap.o 00:25:07.123 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:25:07.123 CXX test/cpp_headers/fsdev_module.o 00:25:07.382 LINK mkfs 00:25:07.382 CC test/nvme/compliance/nvme_compliance.o 00:25:07.382 CC examples/nvme/cmb_copy/cmb_copy.o 00:25:07.382 CC test/nvme/fused_ordering/fused_ordering.o 00:25:07.382 CXX test/cpp_headers/ftl.o 00:25:07.641 CC examples/blob/hello_world/hello_blob.o 00:25:07.641 LINK cmb_copy 00:25:07.641 LINK fused_ordering 00:25:07.641 CXX test/cpp_headers/fuse_dispatcher.o 00:25:07.641 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:25:07.899 LINK vhost_fuzz 00:25:07.899 LINK dif 00:25:07.899 LINK nvme_compliance 00:25:07.899 LINK hello_blob 00:25:07.899 LINK accel_perf 00:25:07.899 CXX test/cpp_headers/gpt_spec.o 00:25:07.899 LINK doorbell_aers 00:25:08.157 CXX test/cpp_headers/hexlify.o 00:25:08.157 CC examples/nvme/abort/abort.o 00:25:08.157 CC test/nvme/fdp/fdp.o 00:25:08.157 CXX test/cpp_headers/histogram_data.o 00:25:08.157 CXX test/cpp_headers/idxd.o 00:25:08.157 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:25:08.157 CXX test/cpp_headers/idxd_spec.o 00:25:08.416 CC examples/blob/cli/blobcli.o 00:25:08.416 CC test/bdev/bdevio/bdevio.o 00:25:08.416 CXX test/cpp_headers/init.o 00:25:08.416 CC test/nvme/cuse/cuse.o 00:25:08.416 CC examples/bdev/hello_world/hello_bdev.o 00:25:08.416 CXX test/cpp_headers/ioat.o 00:25:08.416 LINK pmr_persistence 00:25:08.416 LINK fdp 00:25:08.673 LINK abort 00:25:08.673 CXX test/cpp_headers/ioat_spec.o 00:25:08.673 CXX test/cpp_headers/iscsi_spec.o 00:25:08.673 LINK hello_bdev 00:25:08.931 CXX test/cpp_headers/json.o 00:25:08.931 CXX test/cpp_headers/jsonrpc.o 00:25:08.931 CXX test/cpp_headers/keyring.o 00:25:08.931 CC examples/bdev/bdevperf/bdevperf.o 00:25:08.931 LINK bdevio 00:25:08.931 CXX test/cpp_headers/keyring_module.o 00:25:08.931 LINK blobcli 00:25:08.931 CXX test/cpp_headers/likely.o 00:25:09.189 CXX test/cpp_headers/log.o 00:25:09.189 CXX test/cpp_headers/lvol.o 00:25:09.189 CXX test/cpp_headers/md5.o 00:25:09.189 CXX test/cpp_headers/memory.o 00:25:09.189 CXX test/cpp_headers/mmio.o 00:25:09.189 CXX test/cpp_headers/nbd.o 00:25:09.189 CXX test/cpp_headers/net.o 00:25:09.189 CXX test/cpp_headers/notify.o 00:25:09.446 CXX test/cpp_headers/nvme.o 00:25:09.446 CXX test/cpp_headers/nvme_intel.o 00:25:09.446 CXX test/cpp_headers/nvme_ocssd.o 00:25:09.446 CXX test/cpp_headers/nvme_ocssd_spec.o 00:25:09.446 CXX test/cpp_headers/nvme_spec.o 00:25:09.446 CXX test/cpp_headers/nvme_zns.o 00:25:09.446 CXX test/cpp_headers/nvmf_cmd.o 00:25:09.446 CXX test/cpp_headers/nvmf_fc_spec.o 00:25:09.783 CXX test/cpp_headers/nvmf.o 00:25:09.783 CXX test/cpp_headers/nvmf_spec.o 00:25:09.783 CXX test/cpp_headers/nvmf_transport.o 00:25:09.783 CXX test/cpp_headers/opal.o 00:25:09.783 CXX test/cpp_headers/opal_spec.o 00:25:09.783 CXX test/cpp_headers/pci_ids.o 00:25:09.783 CXX test/cpp_headers/queue.o 00:25:09.783 CXX test/cpp_headers/pipe.o 00:25:09.783 CXX test/cpp_headers/reduce.o 00:25:09.783 CXX test/cpp_headers/rpc.o 00:25:10.044 CXX test/cpp_headers/scheduler.o 00:25:10.044 CXX test/cpp_headers/scsi.o 00:25:10.044 CXX test/cpp_headers/scsi_spec.o 00:25:10.044 CXX test/cpp_headers/sock.o 00:25:10.044 CXX test/cpp_headers/stdinc.o 00:25:10.044 CXX test/cpp_headers/string.o 00:25:10.044 LINK bdevperf 00:25:10.044 CXX test/cpp_headers/thread.o 00:25:10.044 CXX test/cpp_headers/trace.o 00:25:10.044 CXX test/cpp_headers/trace_parser.o 00:25:10.044 CXX test/cpp_headers/tree.o 00:25:10.303 LINK cuse 00:25:10.303 CXX test/cpp_headers/ublk.o 00:25:10.303 CXX test/cpp_headers/util.o 00:25:10.303 CXX test/cpp_headers/uuid.o 00:25:10.303 CXX test/cpp_headers/version.o 00:25:10.303 CXX test/cpp_headers/vfio_user_pci.o 00:25:10.303 CXX test/cpp_headers/vfio_user_spec.o 00:25:10.303 CXX test/cpp_headers/vhost.o 00:25:10.303 CXX test/cpp_headers/vmd.o 00:25:10.563 CXX test/cpp_headers/xor.o 00:25:10.563 CXX test/cpp_headers/zipf.o 00:25:10.821 CC examples/nvmf/nvmf/nvmf.o 00:25:11.079 LINK nvmf 00:25:15.259 LINK esnap 00:25:15.259 00:25:15.259 real 1m52.212s 
00:25:15.259 user 9m43.673s 00:25:15.259 sys 2m24.075s 00:25:15.259 13:20:08 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:25:15.259 13:20:08 make -- common/autotest_common.sh@10 -- $ set +x 00:25:15.260 ************************************ 00:25:15.260 END TEST make 00:25:15.260 ************************************ 00:25:15.260 13:20:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:25:15.260 13:20:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:15.260 13:20:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:15.260 13:20:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:15.260 13:20:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:15.260 13:20:08 -- pm/common@44 -- $ pid=5343 00:25:15.260 13:20:08 -- pm/common@50 -- $ kill -TERM 5343 00:25:15.260 13:20:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:15.260 13:20:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:15.260 13:20:08 -- pm/common@44 -- $ pid=5345 00:25:15.260 13:20:08 -- pm/common@50 -- $ kill -TERM 5345 00:25:15.260 13:20:08 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:25:15.260 13:20:08 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:25:15.519 13:20:08 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:15.519 13:20:08 -- common/autotest_common.sh@1711 -- # lcov --version 00:25:15.519 13:20:08 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:15.519 13:20:08 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:15.519 13:20:08 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:15.519 13:20:08 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:15.519 13:20:08 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:15.519 13:20:08 -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.519 13:20:08 -- scripts/common.sh@336 -- # read -ra ver1 00:25:15.519 13:20:08 -- scripts/common.sh@337 -- # IFS=.-: 00:25:15.519 13:20:08 -- scripts/common.sh@337 -- # read -ra ver2 00:25:15.519 13:20:08 -- scripts/common.sh@338 -- # local 'op=<' 00:25:15.519 13:20:08 -- scripts/common.sh@340 -- # ver1_l=2 00:25:15.519 13:20:08 -- scripts/common.sh@341 -- # ver2_l=1 00:25:15.519 13:20:08 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:15.519 13:20:08 -- scripts/common.sh@344 -- # case "$op" in 00:25:15.519 13:20:08 -- scripts/common.sh@345 -- # : 1 00:25:15.519 13:20:08 -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:15.519 13:20:08 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:15.519 13:20:08 -- scripts/common.sh@365 -- # decimal 1 00:25:15.519 13:20:08 -- scripts/common.sh@353 -- # local d=1 00:25:15.519 13:20:08 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.519 13:20:08 -- scripts/common.sh@355 -- # echo 1 00:25:15.519 13:20:08 -- scripts/common.sh@365 -- # ver1[v]=1 00:25:15.519 13:20:08 -- scripts/common.sh@366 -- # decimal 2 00:25:15.519 13:20:08 -- scripts/common.sh@353 -- # local d=2 00:25:15.519 13:20:08 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.519 13:20:08 -- scripts/common.sh@355 -- # echo 2 00:25:15.519 13:20:08 -- scripts/common.sh@366 -- # ver2[v]=2 00:25:15.519 13:20:08 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:15.519 13:20:08 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:15.519 13:20:08 -- scripts/common.sh@368 -- # return 0 00:25:15.519 13:20:08 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.519 13:20:08 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:15.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.519 --rc genhtml_branch_coverage=1 00:25:15.519 --rc genhtml_function_coverage=1 00:25:15.519 --rc genhtml_legend=1 00:25:15.519 --rc geninfo_all_blocks=1 00:25:15.519 --rc geninfo_unexecuted_blocks=1 00:25:15.519 00:25:15.519 ' 00:25:15.519 13:20:08 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:15.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.519 --rc genhtml_branch_coverage=1 00:25:15.519 --rc genhtml_function_coverage=1 00:25:15.519 --rc genhtml_legend=1 00:25:15.519 --rc geninfo_all_blocks=1 00:25:15.519 --rc geninfo_unexecuted_blocks=1 00:25:15.519 00:25:15.519 ' 00:25:15.519 13:20:08 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:15.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.519 --rc genhtml_branch_coverage=1 00:25:15.519 --rc genhtml_function_coverage=1 00:25:15.519 --rc genhtml_legend=1 00:25:15.519 --rc geninfo_all_blocks=1 00:25:15.519 --rc geninfo_unexecuted_blocks=1 00:25:15.519 00:25:15.519 ' 00:25:15.519 13:20:08 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:15.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.519 --rc genhtml_branch_coverage=1 00:25:15.519 --rc genhtml_function_coverage=1 00:25:15.519 --rc genhtml_legend=1 00:25:15.519 --rc geninfo_all_blocks=1 00:25:15.519 --rc geninfo_unexecuted_blocks=1 00:25:15.519 00:25:15.519 ' 00:25:15.519 13:20:08 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:15.519 13:20:08 -- nvmf/common.sh@7 -- # uname -s 00:25:15.519 13:20:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.519 13:20:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.519 13:20:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.519 13:20:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.519 13:20:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.519 13:20:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.519 13:20:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.519 13:20:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.519 13:20:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.519 13:20:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.519 13:20:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c818d8d4-9664-40ed-b0e6-117acd044092 00:25:15.519 
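The lcov probe traced above is scripts/common.sh splitting both version strings on ".-:" and comparing them field by field; with lcov 1.15 against the cutoff 2, the first field (1 < 2) already decides the comparison, which is why the pre-2.0 coverage flags get enabled. A minimal bash restatement of that logic (the helper name version_lt is illustrative, not the script's own):

    version_lt() {                             # returns 0 when $1 < $2
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"       # same split the trace shows (IFS=.-:)
        IFS='.-:' read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                               # equal versions are not "less than"
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov: enable branch/function coverage flags"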
13:20:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=c818d8d4-9664-40ed-b0e6-117acd044092 00:25:15.519 13:20:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.519 13:20:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.519 13:20:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:15.519 13:20:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.519 13:20:08 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:15.519 13:20:08 -- scripts/common.sh@15 -- # shopt -s extglob 00:25:15.519 13:20:08 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.519 13:20:08 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.519 13:20:08 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.519 13:20:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.519 13:20:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.519 13:20:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.519 13:20:08 -- paths/export.sh@5 -- # export PATH 00:25:15.519 13:20:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.519 13:20:08 -- nvmf/common.sh@51 -- # : 0 00:25:15.519 13:20:08 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:15.519 13:20:08 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:15.519 13:20:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.519 13:20:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.519 13:20:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.519 13:20:08 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:15.519 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:15.519 13:20:08 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:15.519 13:20:08 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:15.519 13:20:08 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:15.519 13:20:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:25:15.519 13:20:08 -- spdk/autotest.sh@32 -- # uname -s 00:25:15.519 13:20:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:25:15.519 13:20:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:25:15.519 13:20:08 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:25:15.519 13:20:08 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:25:15.519 13:20:08 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:25:15.519 13:20:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:25:15.519 13:20:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:25:15.519 13:20:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:25:15.519 13:20:08 -- spdk/autotest.sh@48 -- # udevadm_pid=55107 00:25:15.519 13:20:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:25:15.519 13:20:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:25:15.519 13:20:08 -- pm/common@17 -- # local monitor 00:25:15.519 13:20:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:25:15.519 13:20:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:25:15.778 13:20:08 -- pm/common@25 -- # sleep 1 00:25:15.778 13:20:08 -- pm/common@21 -- # date +%s 00:25:15.778 13:20:08 -- pm/common@21 -- # date +%s 00:25:15.778 13:20:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733491208 00:25:15.778 13:20:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733491208 00:25:15.778 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733491208_collect-cpu-load.pm.log 00:25:15.778 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733491208_collect-vmstat.pm.log 00:25:16.714 13:20:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:25:16.714 13:20:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:25:16.714 13:20:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.714 13:20:09 -- common/autotest_common.sh@10 -- # set +x 00:25:16.714 13:20:09 -- spdk/autotest.sh@59 -- # create_test_list 00:25:16.714 13:20:09 -- common/autotest_common.sh@752 -- # xtrace_disable 00:25:16.714 13:20:09 -- common/autotest_common.sh@10 -- # set +x 00:25:16.714 13:20:09 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:25:16.714 13:20:09 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:25:16.714 13:20:09 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:25:16.714 13:20:09 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:25:16.714 13:20:09 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:25:16.714 13:20:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:25:16.714 13:20:09 -- common/autotest_common.sh@1457 -- # uname 00:25:16.714 13:20:09 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:25:16.714 13:20:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:25:16.714 13:20:09 -- common/autotest_common.sh@1477 -- # uname 00:25:16.714 13:20:09 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:25:16.714 13:20:09 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:25:16.714 13:20:09 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:25:16.714 lcov: LCOV version 1.15 00:25:16.714 13:20:09 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:25:38.672 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:25:38.672 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:25:53.546 13:20:45 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:25:53.546 13:20:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:53.546 13:20:45 -- common/autotest_common.sh@10 -- # set +x 00:25:53.546 13:20:45 -- spdk/autotest.sh@78 -- # rm -f 00:25:53.546 13:20:45 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:53.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:53.546 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:53.546 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:53.546 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:25:53.546 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:25:53.546 13:20:46 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:25:53.546 13:20:46 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:25:53.546 13:20:46 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:25:53.546 13:20:46 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:25:53.546 13:20:46 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:25:53.546 13:20:46 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:25:53.546 13:20:46 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:25:53.546 13:20:46 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:25:53.546 13:20:46 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:25:53.546 13:20:46 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:25:53.546 13:20:46 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:53.546 13:20:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:53.546 13:20:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:53.546 13:20:46 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:25:53.546 13:20:46 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:25:53.546 13:20:46 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:25:53.546 13:20:46 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:25:53.546 13:20:46 -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:25:53.546 13:20:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:25:53.546 13:20:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:53.546 13:20:46 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:25:53.546 13:20:46 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:25:53.546 13:20:46 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:25:53.546 13:20:46 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:25:53.546 13:20:46 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:25:53.546 13:20:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:25:53.546 13:20:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:53.546 13:20:46 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:25:53.546 13:20:46 -- 
common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:25:53.546 13:20:46 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:25:53.546 13:20:46 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:25:53.546 13:20:46 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:25:53.546 13:20:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:25:53.546 13:20:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:53.546 13:20:46 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:25:53.546 13:20:46 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n2 00:25:53.546 13:20:46 -- common/autotest_common.sh@1650 -- # local device=nvme3n2 00:25:53.546 13:20:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:25:53.546 13:20:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:53.546 13:20:46 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:25:53.546 13:20:46 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n3 00:25:53.546 13:20:46 -- common/autotest_common.sh@1650 -- # local device=nvme3n3 00:25:53.546 13:20:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:25:53.546 13:20:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:53.546 13:20:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:25:53.546 13:20:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:53.546 13:20:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:53.547 13:20:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:25:53.547 13:20:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:25:53.547 13:20:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:25:53.547 No valid GPT data, bailing 00:25:53.547 13:20:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:53.547 13:20:46 -- scripts/common.sh@394 -- # pt= 00:25:53.547 13:20:46 -- scripts/common.sh@395 -- # return 1 00:25:53.547 13:20:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:25:53.547 1+0 records in 00:25:53.547 1+0 records out 00:25:53.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131767 s, 79.6 MB/s 00:25:53.547 13:20:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:53.547 13:20:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:53.547 13:20:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:25:53.547 13:20:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:25:53.547 13:20:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:25:53.547 No valid GPT data, bailing 00:25:53.547 13:20:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:53.824 13:20:46 -- scripts/common.sh@394 -- # pt= 00:25:53.824 13:20:46 -- scripts/common.sh@395 -- # return 1 00:25:53.824 13:20:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:25:53.824 1+0 records in 00:25:53.824 1+0 records out 00:25:53.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00409035 s, 256 MB/s 00:25:53.824 13:20:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:53.824 13:20:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:53.824 13:20:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:25:53.824 13:20:46 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:25:53.824 13:20:46 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:25:53.824 No valid GPT data, bailing 00:25:53.824 13:20:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:25:53.824 13:20:46 -- scripts/common.sh@394 -- # pt= 00:25:53.824 13:20:46 -- scripts/common.sh@395 -- # return 1 00:25:53.824 13:20:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:25:53.824 1+0 records in 00:25:53.824 1+0 records out 00:25:53.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499665 s, 210 MB/s 00:25:53.824 13:20:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:53.824 13:20:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:53.824 13:20:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:25:53.824 13:20:46 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:25:53.824 13:20:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:25:53.824 No valid GPT data, bailing 00:25:53.824 13:20:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:25:53.824 13:20:46 -- scripts/common.sh@394 -- # pt= 00:25:53.824 13:20:46 -- scripts/common.sh@395 -- # return 1 00:25:53.824 13:20:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:25:53.824 1+0 records in 00:25:53.824 1+0 records out 00:25:53.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634165 s, 165 MB/s 00:25:53.824 13:20:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:53.824 13:20:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:53.824 13:20:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:25:53.824 13:20:46 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:25:53.824 13:20:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:25:53.824 No valid GPT data, bailing 00:25:54.084 13:20:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:25:54.084 13:20:46 -- scripts/common.sh@394 -- # pt= 00:25:54.084 13:20:46 -- scripts/common.sh@395 -- # return 1 00:25:54.084 13:20:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:25:54.084 1+0 records in 00:25:54.084 1+0 records out 00:25:54.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551489 s, 190 MB/s 00:25:54.084 13:20:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:54.084 13:20:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:54.084 13:20:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:25:54.084 13:20:46 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:25:54.084 13:20:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:25:54.084 No valid GPT data, bailing 00:25:54.084 13:20:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:25:54.084 13:20:47 -- scripts/common.sh@394 -- # pt= 00:25:54.084 13:20:47 -- scripts/common.sh@395 -- # return 1 00:25:54.084 13:20:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:25:54.084 1+0 records in 00:25:54.084 1+0 records out 00:25:54.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449425 s, 233 MB/s 00:25:54.084 13:20:47 -- spdk/autotest.sh@105 -- # sync 00:25:54.084 13:20:47 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:25:54.084 13:20:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:25:54.084 13:20:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:25:56.620 
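The pre-cleanup pass above repeats the same probe-then-scrub sequence for each of the six namespaces. Collapsed into one place, this is a sketch of the traced pattern, not the full block_in_use helper:

    shopt -s extglob                           # the /dev/nvme*n!(*p*) glob above requires extglob
    for dev in /dev/nvme*n!(*p*); do           # namespaces, skipping partition nodes
        /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev"     # GPT probe; here: "No valid GPT data, bailing"
        if [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1                 # no partition table: scrub the first MiB
        fi
    done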
13:20:49 -- spdk/autotest.sh@111 -- # uname -s
00:25:56.620 13:20:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:25:56.620 13:20:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:25:56.620 13:20:49 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:25:57.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:25:57.753 Hugepages
00:25:57.753 node hugesize free / total
00:25:57.753 node0 1048576kB 0 / 0
00:25:57.753 node0 2048kB 0 / 0
00:25:57.753
00:25:57.753 Type BDF Vendor Device NUMA Driver Device Block devices
00:25:58.012 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:25:58.012 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:25:58.012 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:25:58.270 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:25:58.270 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:25:58.270 13:20:51 -- spdk/autotest.sh@117 -- # uname -s
00:25:58.270 13:20:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:25:58.270 13:20:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:25:58.270 13:20:51 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:25:58.838 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:25:59.776 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:25:59.776 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:25:59.776 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:25:59.776 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:25:59.776 13:20:52 -- common/autotest_common.sh@1517 -- # sleep 1
00:26:01.155 13:20:53 -- common/autotest_common.sh@1518 -- # bdfs=()
00:26:01.155 13:20:53 -- common/autotest_common.sh@1518 -- # local bdfs
00:26:01.155 13:20:53 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:26:01.155 13:20:53 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:26:01.155 13:20:53 -- common/autotest_common.sh@1498 -- # bdfs=()
00:26:01.155 13:20:53 -- common/autotest_common.sh@1498 -- # local bdfs
00:26:01.155 13:20:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:26:01.155 13:20:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:26:01.155 13:20:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:26:01.155 13:20:53 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:26:01.155 13:20:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:26:01.155 13:20:53 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:26:01.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:01.673 Waiting for block devices as requested
00:26:01.673 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:26:01.931 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:26:01.931 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:26:01.931 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:26:07.194 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:26:07.194 13:21:00 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:26:07.194 13:21:00 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
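get_nvme_bdfs, traced above, does not walk sysfs; it asks scripts/gen_nvme.sh for an SPDK JSON config and lets jq pull one PCIe address per controller. A sketch, with the JSON shape stated as an assumption about what gen_nvme.sh emits on this VM:

    # Assumed output shape (one bdev_nvme_attach_controller entry per controller):
    #   {"config":[{"method":"bdev_nvme_attach_controller",
    #               "params":{"name":"Nvme0","trtype":"PCIe","traddr":"0000:00:10.0"}}, ...]}
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && return 1         # the "(( 4 == 0 ))" guard in the trace
    printf '%s\n' "${bdfs[@]}"                 # -> the four 0000:00:1x.0 addresses above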
00:26:07.194 13:21:00 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:26:07.194 13:21:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:26:07.194 13:21:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:26:07.194 13:21:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:26:07.194 13:21:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:26:07.194 13:21:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:26:07.194 13:21:00 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:26:07.195 13:21:00 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:26:07.195 13:21:00 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:26:07.195 13:21:00 -- common/autotest_common.sh@1531 -- # grep oacs 00:26:07.195 13:21:00 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:26:07.195 13:21:00 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:26:07.195 13:21:00 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:26:07.195 13:21:00 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:26:07.195 13:21:00 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:26:07.195 13:21:00 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:26:07.195 13:21:00 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:26:07.195 13:21:00 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:26:07.195 13:21:00 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:26:07.195 13:21:00 -- common/autotest_common.sh@1543 -- # continue 00:26:07.195 13:21:00 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:26:07.195 13:21:00 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:26:07.195 13:21:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:26:07.195 13:21:00 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:26:07.195 13:21:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:26:07.195 13:21:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:26:07.195 13:21:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:26:07.195 13:21:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:26:07.195 13:21:00 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:26:07.195 13:21:00 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:26:07.195 13:21:00 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:26:07.195 13:21:00 -- common/autotest_common.sh@1531 -- # grep oacs 00:26:07.195 13:21:00 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:26:07.195 13:21:00 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:26:07.195 13:21:00 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:26:07.195 13:21:00 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:26:07.195 13:21:00 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:26:07.195 13:21:00 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:26:07.195 13:21:00 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:26:07.195 13:21:00 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:26:07.195 13:21:00 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:26:07.195 13:21:00 -- common/autotest_common.sh@1543 -- # continue 00:26:07.195 13:21:00 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:26:07.195 13:21:00 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:26:07.195 13:21:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:26:07.195 13:21:00 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:26:07.195 13:21:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:26:07.195 13:21:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:26:07.195 13:21:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:26:07.195 13:21:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:26:07.195 13:21:00 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:26:07.195 13:21:00 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:26:07.195 13:21:00 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:26:07.195 13:21:00 -- common/autotest_common.sh@1531 -- # grep oacs 00:26:07.195 13:21:00 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:26:07.195 13:21:00 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:26:07.195 13:21:00 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:26:07.195 13:21:00 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:26:07.195 13:21:00 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:26:07.195 13:21:00 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:26:07.195 13:21:00 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:26:07.453 13:21:00 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:26:07.453 13:21:00 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:26:07.453 13:21:00 -- common/autotest_common.sh@1543 -- # continue 00:26:07.453 13:21:00 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:26:07.453 13:21:00 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:26:07.453 13:21:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:26:07.453 13:21:00 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:26:07.453 13:21:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:26:07.453 13:21:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:26:07.453 13:21:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:26:07.453 13:21:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:26:07.453 13:21:00 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:26:07.453 13:21:00 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:26:07.453 13:21:00 -- common/autotest_common.sh@1531 -- # grep oacs 00:26:07.453 13:21:00 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:26:07.453 13:21:00 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:26:07.453 13:21:00 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:26:07.453 13:21:00 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:26:07.453 13:21:00 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:26:07.453 13:21:00 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:26:07.453 13:21:00 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:26:07.453 13:21:00 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:26:07.453 13:21:00 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:26:07.453 13:21:00 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:26:07.453 13:21:00 -- common/autotest_common.sh@1543 -- # continue 00:26:07.453 13:21:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:26:07.453 13:21:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.453 13:21:00 -- common/autotest_common.sh@10 -- # set +x 00:26:07.453 13:21:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:26:07.453 13:21:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:07.453 13:21:00 -- common/autotest_common.sh@10 -- # set +x 00:26:07.453 13:21:00 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:08.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:09.037 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:26:09.037 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:09.037 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:09.037 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:26:09.037 13:21:01 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:26:09.037 13:21:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:09.037 13:21:01 -- common/autotest_common.sh@10 -- # set +x 00:26:09.037 13:21:02 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:26:09.037 13:21:02 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:26:09.037 13:21:02 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:26:09.037 13:21:02 -- common/autotest_common.sh@1563 -- # bdfs=() 00:26:09.037 13:21:02 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:26:09.037 13:21:02 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:26:09.037 13:21:02 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:26:09.037 13:21:02 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:26:09.037 13:21:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:26:09.038 13:21:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:26:09.038 13:21:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:09.038 13:21:02 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:09.038 13:21:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:26:09.038 13:21:02 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:26:09.038 13:21:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:26:09.038 13:21:02 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:26:09.038 13:21:02 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:26:09.038 13:21:02 -- common/autotest_common.sh@1566 -- # device=0x0010 00:26:09.038 13:21:02 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:26:09.038 13:21:02 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:26:09.038 13:21:02 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:26:09.038 13:21:02 -- common/autotest_common.sh@1566 -- # device=0x0010 00:26:09.038 
13:21:02 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:26:09.038 13:21:02 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:26:09.038 13:21:02 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:26:09.038 13:21:02 -- common/autotest_common.sh@1566 -- # device=0x0010 00:26:09.038 13:21:02 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:26:09.038 13:21:02 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:26:09.038 13:21:02 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:26:09.326 13:21:02 -- common/autotest_common.sh@1566 -- # device=0x0010 00:26:09.326 13:21:02 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:26:09.326 13:21:02 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:26:09.326 13:21:02 -- common/autotest_common.sh@1572 -- # return 0 00:26:09.326 13:21:02 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:26:09.326 13:21:02 -- common/autotest_common.sh@1580 -- # return 0 00:26:09.326 13:21:02 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:26:09.326 13:21:02 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:26:09.326 13:21:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:26:09.326 13:21:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:26:09.326 13:21:02 -- spdk/autotest.sh@149 -- # timing_enter lib 00:26:09.326 13:21:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:09.326 13:21:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.326 13:21:02 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:26:09.326 13:21:02 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:26:09.326 13:21:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:09.326 13:21:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.326 13:21:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.326 ************************************ 00:26:09.326 START TEST env 00:26:09.326 ************************************ 00:26:09.326 13:21:02 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:26:09.326 * Looking for test storage... 
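The opal_revert_cleanup step traced above reduces to a device-id filter over sysfs: get_nvme_bdfs_by_id 0x0a54 keeps only controllers whose PCI device id matches, and these QEMU controllers all report 0x0010, so the revert is a no-op. A condensed sketch (variable names ours):

    want=0x0a54
    matches=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")    # -> 0x0010 on this VM
        [[ $device == "$want" ]] && matches+=("$bdf")
    done
    (( ${#matches[@]} > 0 )) || echo "no 0x0a54 controllers; skipping OPAL revert"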
00:26:09.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:26:09.326 13:21:02 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:09.327 13:21:02 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:09.327 13:21:02 env -- common/autotest_common.sh@1711 -- # lcov --version 00:26:09.327 13:21:02 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:09.327 13:21:02 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:09.327 13:21:02 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:09.327 13:21:02 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:09.327 13:21:02 env -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.327 13:21:02 env -- scripts/common.sh@336 -- # read -ra ver1 00:26:09.327 13:21:02 env -- scripts/common.sh@337 -- # IFS=.-: 00:26:09.327 13:21:02 env -- scripts/common.sh@337 -- # read -ra ver2 00:26:09.327 13:21:02 env -- scripts/common.sh@338 -- # local 'op=<' 00:26:09.327 13:21:02 env -- scripts/common.sh@340 -- # ver1_l=2 00:26:09.327 13:21:02 env -- scripts/common.sh@341 -- # ver2_l=1 00:26:09.327 13:21:02 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:09.327 13:21:02 env -- scripts/common.sh@344 -- # case "$op" in 00:26:09.327 13:21:02 env -- scripts/common.sh@345 -- # : 1 00:26:09.327 13:21:02 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:09.327 13:21:02 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:09.327 13:21:02 env -- scripts/common.sh@365 -- # decimal 1 00:26:09.327 13:21:02 env -- scripts/common.sh@353 -- # local d=1 00:26:09.327 13:21:02 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.327 13:21:02 env -- scripts/common.sh@355 -- # echo 1 00:26:09.327 13:21:02 env -- scripts/common.sh@365 -- # ver1[v]=1 00:26:09.327 13:21:02 env -- scripts/common.sh@366 -- # decimal 2 00:26:09.327 13:21:02 env -- scripts/common.sh@353 -- # local d=2 00:26:09.327 13:21:02 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:09.327 13:21:02 env -- scripts/common.sh@355 -- # echo 2 00:26:09.327 13:21:02 env -- scripts/common.sh@366 -- # ver2[v]=2 00:26:09.327 13:21:02 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:09.327 13:21:02 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:09.327 13:21:02 env -- scripts/common.sh@368 -- # return 0 00:26:09.327 13:21:02 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:09.327 13:21:02 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:09.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.327 --rc genhtml_branch_coverage=1 00:26:09.327 --rc genhtml_function_coverage=1 00:26:09.327 --rc genhtml_legend=1 00:26:09.327 --rc geninfo_all_blocks=1 00:26:09.327 --rc geninfo_unexecuted_blocks=1 00:26:09.327 00:26:09.327 ' 00:26:09.327 13:21:02 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:09.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.327 --rc genhtml_branch_coverage=1 00:26:09.327 --rc genhtml_function_coverage=1 00:26:09.327 --rc genhtml_legend=1 00:26:09.327 --rc geninfo_all_blocks=1 00:26:09.327 --rc geninfo_unexecuted_blocks=1 00:26:09.327 00:26:09.327 ' 00:26:09.327 13:21:02 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:09.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.327 --rc genhtml_branch_coverage=1 00:26:09.327 --rc genhtml_function_coverage=1 00:26:09.327 --rc 
genhtml_legend=1 00:26:09.327 --rc geninfo_all_blocks=1 00:26:09.327 --rc geninfo_unexecuted_blocks=1 00:26:09.327 00:26:09.327 ' 00:26:09.327 13:21:02 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:09.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.327 --rc genhtml_branch_coverage=1 00:26:09.327 --rc genhtml_function_coverage=1 00:26:09.327 --rc genhtml_legend=1 00:26:09.327 --rc geninfo_all_blocks=1 00:26:09.327 --rc geninfo_unexecuted_blocks=1 00:26:09.327 00:26:09.327 ' 00:26:09.327 13:21:02 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:26:09.327 13:21:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:09.327 13:21:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.327 13:21:02 env -- common/autotest_common.sh@10 -- # set +x 00:26:09.327 ************************************ 00:26:09.327 START TEST env_memory 00:26:09.327 ************************************ 00:26:09.327 13:21:02 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:26:09.327 00:26:09.327 00:26:09.327 CUnit - A unit testing framework for C - Version 2.1-3 00:26:09.327 http://cunit.sourceforge.net/ 00:26:09.327 00:26:09.327 00:26:09.327 Suite: memory 00:26:09.585 Test: alloc and free memory map ...[2024-12-06 13:21:02.469188] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:26:09.585 passed 00:26:09.585 Test: mem map translation ...[2024-12-06 13:21:02.542241] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:26:09.585 [2024-12-06 13:21:02.542363] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:26:09.585 [2024-12-06 13:21:02.542500] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:26:09.585 [2024-12-06 13:21:02.542554] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:26:09.585 passed 00:26:09.585 Test: mem map registration ...[2024-12-06 13:21:02.658104] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:26:09.585 [2024-12-06 13:21:02.658250] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:26:09.843 passed 00:26:09.843 Test: mem map adjacent registrations ...passed 00:26:09.843 00:26:09.843 Run Summary: Type Total Ran Passed Failed Inactive 00:26:09.843 suites 1 1 n/a 0 0 00:26:09.843 tests 4 4 4 0 0 00:26:09.843 asserts 152 152 152 0 n/a 00:26:09.843 00:26:09.843 Elapsed time = 0.358 seconds 00:26:09.843 ************************************ 00:26:09.843 END TEST env_memory 00:26:09.843 ************************************ 00:26:09.843 00:26:09.843 real 0m0.410s 00:26:09.843 user 0m0.370s 00:26:09.843 sys 0m0.029s 00:26:09.843 13:21:02 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.843 13:21:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:26:09.843 13:21:02 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:26:09.843 13:21:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:09.843 13:21:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.843 13:21:02 env -- common/autotest_common.sh@10 -- # set +x 00:26:09.843 ************************************ 00:26:09.843 START TEST env_vtophys 00:26:09.843 ************************************ 00:26:09.843 13:21:02 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:26:09.843 EAL: lib.eal log level changed from notice to debug 00:26:09.843 EAL: Detected lcore 0 as core 0 on socket 0 00:26:09.843 EAL: Detected lcore 1 as core 0 on socket 0 00:26:09.843 EAL: Detected lcore 2 as core 0 on socket 0 00:26:09.843 EAL: Detected lcore 3 as core 0 on socket 0 00:26:09.843 EAL: Detected lcore 4 as core 0 on socket 0 00:26:09.844 EAL: Detected lcore 5 as core 0 on socket 0 00:26:09.844 EAL: Detected lcore 6 as core 0 on socket 0 00:26:09.844 EAL: Detected lcore 7 as core 0 on socket 0 00:26:09.844 EAL: Detected lcore 8 as core 0 on socket 0 00:26:09.844 EAL: Detected lcore 9 as core 0 on socket 0 00:26:10.101 EAL: Maximum logical cores by configuration: 128 00:26:10.101 EAL: Detected CPU lcores: 10 00:26:10.101 EAL: Detected NUMA nodes: 1 00:26:10.101 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:26:10.101 EAL: Detected shared linkage of DPDK 00:26:10.101 EAL: No shared files mode enabled, IPC will be disabled 00:26:10.101 EAL: Selected IOVA mode 'PA' 00:26:10.101 EAL: Probing VFIO support... 00:26:10.101 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:26:10.101 EAL: VFIO modules not loaded, skipping VFIO support... 00:26:10.101 EAL: Ask a virtual area of 0x2e000 bytes 00:26:10.101 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:26:10.101 EAL: Setting up physically contiguous memory... 
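The VFIO probe above fails because this VM loads neither vfio nor vfio_pci, so EAL falls back to the no-IOMMU path with IOVA mode 'PA'. On a host meant to match these preconditions, SPDK's setup script reserves the hugepages and rebinds the NVMe functions before any test runs; a minimal sketch, assuming the repo root used throughout this log (the HUGEMEM value is an arbitrary choice for a VM this size, not taken from this run):

  # Reserve 2 GiB of hugepages and rebind NVMe controllers to a userspace driver.
  # setup.sh prefers vfio-pci when an IOMMU is present and otherwise falls back,
  # matching the "VFIO modules not loaded" path seen here.
  sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh

  # Show reserved hugepages and the driver now bound to each device.
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh status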
00:26:10.101 EAL: Setting maximum number of open files to 524288 00:26:10.101 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:26:10.101 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:26:10.101 EAL: Ask a virtual area of 0x61000 bytes 00:26:10.101 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:26:10.101 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:26:10.101 EAL: Ask a virtual area of 0x400000000 bytes 00:26:10.101 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:26:10.101 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:26:10.101 EAL: Ask a virtual area of 0x61000 bytes 00:26:10.101 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:26:10.101 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:26:10.101 EAL: Ask a virtual area of 0x400000000 bytes 00:26:10.101 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:26:10.101 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:26:10.101 EAL: Ask a virtual area of 0x61000 bytes 00:26:10.102 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:26:10.102 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:26:10.102 EAL: Ask a virtual area of 0x400000000 bytes 00:26:10.102 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:26:10.102 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:26:10.102 EAL: Ask a virtual area of 0x61000 bytes 00:26:10.102 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:26:10.102 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:26:10.102 EAL: Ask a virtual area of 0x400000000 bytes 00:26:10.102 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:26:10.102 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:26:10.102 EAL: Hugepages will be freed exactly as allocated. 00:26:10.102 EAL: No shared files mode enabled, IPC is disabled 00:26:10.102 EAL: No shared files mode enabled, IPC is disabled 00:26:10.102 EAL: TSC frequency is ~2100000 KHz 00:26:10.102 EAL: Main lcore 0 is ready (tid=7f4b002cba40;cpuset=[0]) 00:26:10.102 EAL: Trying to obtain current memory policy. 00:26:10.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:10.102 EAL: Restoring previous memory policy: 0 00:26:10.102 EAL: request: mp_malloc_sync 00:26:10.102 EAL: No shared files mode enabled, IPC is disabled 00:26:10.102 EAL: Heap on socket 0 was expanded by 2MB 00:26:10.102 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:26:10.102 EAL: No PCI address specified using 'addr=' in: bus=pci 00:26:10.102 EAL: Mem event callback 'spdk:(nil)' registered 00:26:10.102 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:26:10.102 00:26:10.102 00:26:10.102 CUnit - A unit testing framework for C - Version 2.1-3 00:26:10.102 http://cunit.sourceforge.net/ 00:26:10.102 00:26:10.102 00:26:10.102 Suite: components_suite 00:26:11.044 Test: vtophys_malloc_test ...passed 00:26:11.044 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
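Each expand/shrink pair that follows is one spdk_malloc/spdk_free round of vtophys_spdk_malloc_test: the expansions grow as 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, i.e. 2^k + 2 MB per step (the extra 2 MB on top of each power-of-two request is presumably the hugepage already claimed for allocator overhead; the log alone does not say). The binary can be rerun outside the harness to watch the same notices, using the path logged above (root is typically needed for hugepage access):

  # Run the vtophys unit test directly; the EAL heap expand/shrink NOTICEs
  # come from the DPDK mem event callback registered above as 'spdk:(nil)'.
  sudo /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys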
00:26:11.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:11.044 EAL: Restoring previous memory policy: 4 00:26:11.044 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.044 EAL: request: mp_malloc_sync 00:26:11.044 EAL: No shared files mode enabled, IPC is disabled 00:26:11.044 EAL: Heap on socket 0 was expanded by 4MB 00:26:11.044 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.044 EAL: request: mp_malloc_sync 00:26:11.044 EAL: No shared files mode enabled, IPC is disabled 00:26:11.044 EAL: Heap on socket 0 was shrunk by 4MB 00:26:11.044 EAL: Trying to obtain current memory policy. 00:26:11.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:11.044 EAL: Restoring previous memory policy: 4 00:26:11.044 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.044 EAL: request: mp_malloc_sync 00:26:11.044 EAL: No shared files mode enabled, IPC is disabled 00:26:11.044 EAL: Heap on socket 0 was expanded by 6MB 00:26:11.044 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.044 EAL: request: mp_malloc_sync 00:26:11.044 EAL: No shared files mode enabled, IPC is disabled 00:26:11.044 EAL: Heap on socket 0 was shrunk by 6MB 00:26:11.044 EAL: Trying to obtain current memory policy. 00:26:11.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:11.044 EAL: Restoring previous memory policy: 4 00:26:11.044 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.044 EAL: request: mp_malloc_sync 00:26:11.044 EAL: No shared files mode enabled, IPC is disabled 00:26:11.044 EAL: Heap on socket 0 was expanded by 10MB 00:26:11.044 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.044 EAL: request: mp_malloc_sync 00:26:11.044 EAL: No shared files mode enabled, IPC is disabled 00:26:11.044 EAL: Heap on socket 0 was shrunk by 10MB 00:26:11.044 EAL: Trying to obtain current memory policy. 00:26:11.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:11.044 EAL: Restoring previous memory policy: 4 00:26:11.044 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.044 EAL: request: mp_malloc_sync 00:26:11.044 EAL: No shared files mode enabled, IPC is disabled 00:26:11.044 EAL: Heap on socket 0 was expanded by 18MB 00:26:11.044 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.044 EAL: request: mp_malloc_sync 00:26:11.044 EAL: No shared files mode enabled, IPC is disabled 00:26:11.044 EAL: Heap on socket 0 was shrunk by 18MB 00:26:11.044 EAL: Trying to obtain current memory policy. 00:26:11.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:11.044 EAL: Restoring previous memory policy: 4 00:26:11.044 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.044 EAL: request: mp_malloc_sync 00:26:11.044 EAL: No shared files mode enabled, IPC is disabled 00:26:11.044 EAL: Heap on socket 0 was expanded by 34MB 00:26:11.044 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.044 EAL: request: mp_malloc_sync 00:26:11.044 EAL: No shared files mode enabled, IPC is disabled 00:26:11.044 EAL: Heap on socket 0 was shrunk by 34MB 00:26:11.044 EAL: Trying to obtain current memory policy. 
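The 'Setting policy MPOL_PREFERRED for socket 0' / 'Restoring previous memory policy' bracket around every allocation is EAL steering each new hugepage to the test's NUMA node and then undoing the change. For comparison, the process-wide equivalent outside EAL looks like the following (./your_workload is a stand-in for any process, not a tool from this repo):

  # Prefer node 0 for page allocation, falling back elsewhere when it is full --
  # the same MPOL_PREFERRED semantics EAL toggles per hugepage request above.
  numactl --preferred=0 ./your_workload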
00:26:11.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:11.302 EAL: Restoring previous memory policy: 4 00:26:11.302 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.302 EAL: request: mp_malloc_sync 00:26:11.302 EAL: No shared files mode enabled, IPC is disabled 00:26:11.302 EAL: Heap on socket 0 was expanded by 66MB 00:26:11.302 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.302 EAL: request: mp_malloc_sync 00:26:11.302 EAL: No shared files mode enabled, IPC is disabled 00:26:11.302 EAL: Heap on socket 0 was shrunk by 66MB 00:26:11.561 EAL: Trying to obtain current memory policy. 00:26:11.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:11.561 EAL: Restoring previous memory policy: 4 00:26:11.561 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.561 EAL: request: mp_malloc_sync 00:26:11.561 EAL: No shared files mode enabled, IPC is disabled 00:26:11.561 EAL: Heap on socket 0 was expanded by 130MB 00:26:11.820 EAL: Calling mem event callback 'spdk:(nil)' 00:26:11.820 EAL: request: mp_malloc_sync 00:26:11.820 EAL: No shared files mode enabled, IPC is disabled 00:26:11.820 EAL: Heap on socket 0 was shrunk by 130MB 00:26:12.078 EAL: Trying to obtain current memory policy. 00:26:12.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:12.078 EAL: Restoring previous memory policy: 4 00:26:12.078 EAL: Calling mem event callback 'spdk:(nil)' 00:26:12.078 EAL: request: mp_malloc_sync 00:26:12.078 EAL: No shared files mode enabled, IPC is disabled 00:26:12.078 EAL: Heap on socket 0 was expanded by 258MB 00:26:12.645 EAL: Calling mem event callback 'spdk:(nil)' 00:26:12.903 EAL: request: mp_malloc_sync 00:26:12.903 EAL: No shared files mode enabled, IPC is disabled 00:26:12.903 EAL: Heap on socket 0 was shrunk by 258MB 00:26:13.161 EAL: Trying to obtain current memory policy. 00:26:13.161 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:13.420 EAL: Restoring previous memory policy: 4 00:26:13.420 EAL: Calling mem event callback 'spdk:(nil)' 00:26:13.420 EAL: request: mp_malloc_sync 00:26:13.420 EAL: No shared files mode enabled, IPC is disabled 00:26:13.420 EAL: Heap on socket 0 was expanded by 514MB 00:26:14.797 EAL: Calling mem event callback 'spdk:(nil)' 00:26:14.797 EAL: request: mp_malloc_sync 00:26:14.797 EAL: No shared files mode enabled, IPC is disabled 00:26:14.797 EAL: Heap on socket 0 was shrunk by 514MB 00:26:15.735 EAL: Trying to obtain current memory policy. 
00:26:15.735 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:15.993 EAL: Restoring previous memory policy: 4 00:26:15.993 EAL: Calling mem event callback 'spdk:(nil)' 00:26:15.993 EAL: request: mp_malloc_sync 00:26:15.993 EAL: No shared files mode enabled, IPC is disabled 00:26:15.993 EAL: Heap on socket 0 was expanded by 1026MB 00:26:18.525 EAL: Calling mem event callback 'spdk:(nil)' 00:26:18.525 EAL: request: mp_malloc_sync 00:26:18.525 EAL: No shared files mode enabled, IPC is disabled 00:26:18.525 EAL: Heap on socket 0 was shrunk by 1026MB 00:26:21.053 passed 00:26:21.053 00:26:21.053 Run Summary: Type Total Ran Passed Failed Inactive 00:26:21.053 suites 1 1 n/a 0 0 00:26:21.053 tests 2 2 2 0 0 00:26:21.053 asserts 5831 5831 5831 0 n/a 00:26:21.053 00:26:21.053 Elapsed time = 10.356 seconds 00:26:21.053 EAL: Calling mem event callback 'spdk:(nil)' 00:26:21.053 EAL: request: mp_malloc_sync 00:26:21.053 EAL: No shared files mode enabled, IPC is disabled 00:26:21.053 EAL: Heap on socket 0 was shrunk by 2MB 00:26:21.053 EAL: No shared files mode enabled, IPC is disabled 00:26:21.053 EAL: No shared files mode enabled, IPC is disabled 00:26:21.053 EAL: No shared files mode enabled, IPC is disabled 00:26:21.053 00:26:21.053 real 0m10.782s 00:26:21.053 user 0m9.060s 00:26:21.053 sys 0m1.536s 00:26:21.053 13:21:13 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.053 ************************************ 00:26:21.053 END TEST env_vtophys 00:26:21.053 ************************************ 00:26:21.053 13:21:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:26:21.053 13:21:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:26:21.053 13:21:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:21.053 13:21:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.053 13:21:13 env -- common/autotest_common.sh@10 -- # set +x 00:26:21.053 ************************************ 00:26:21.053 START TEST env_pci 00:26:21.053 ************************************ 00:26:21.053 13:21:13 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:26:21.053 00:26:21.053 00:26:21.053 CUnit - A unit testing framework for C - Version 2.1-3 00:26:21.053 http://cunit.sourceforge.net/ 00:26:21.053 00:26:21.053 00:26:21.053 Suite: pci 00:26:21.053 Test: pci_hook ...[2024-12-06 13:21:13.736980] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58023 has claimed it 00:26:21.053 EAL: Cannot find device (10000:00:01.0) 00:26:21.053 EAL: Failed to attach device on primary process 00:26:21.053 passed 00:26:21.053 00:26:21.053 Run Summary: Type Total Ran Passed Failed Inactive 00:26:21.053 suites 1 1 n/a 0 0 00:26:21.053 tests 1 1 1 0 0 00:26:21.053 asserts 25 25 25 0 n/a 00:26:21.053 00:26:21.053 Elapsed time = 0.015 seconds 00:26:21.053 00:26:21.053 real 0m0.110s 00:26:21.054 user 0m0.044s 00:26:21.054 sys 0m0.064s 00:26:21.054 13:21:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.054 13:21:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:26:21.054 ************************************ 00:26:21.054 END TEST env_pci 00:26:21.054 ************************************ 00:26:21.054 13:21:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:26:21.054 13:21:13 env -- env/env.sh@15 -- # uname 00:26:21.054 13:21:13 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:26:21.054 13:21:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:26:21.054 13:21:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:26:21.054 13:21:13 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:21.054 13:21:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.054 13:21:13 env -- common/autotest_common.sh@10 -- # set +x 00:26:21.054 ************************************ 00:26:21.054 START TEST env_dpdk_post_init 00:26:21.054 ************************************ 00:26:21.054 13:21:13 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:26:21.054 EAL: Detected CPU lcores: 10 00:26:21.054 EAL: Detected NUMA nodes: 1 00:26:21.054 EAL: Detected shared linkage of DPDK 00:26:21.054 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:26:21.054 EAL: Selected IOVA mode 'PA' 00:26:21.054 TELEMETRY: No legacy callbacks, legacy socket not created 00:26:21.054 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:26:21.054 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:26:21.311 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:26:21.311 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:26:21.311 Starting DPDK initialization... 00:26:21.311 Starting SPDK post initialization... 00:26:21.311 SPDK NVMe probe 00:26:21.311 Attaching to 0000:00:10.0 00:26:21.311 Attaching to 0000:00:11.0 00:26:21.311 Attaching to 0000:00:12.0 00:26:21.311 Attaching to 0000:00:13.0 00:26:21.311 Attached to 0000:00:10.0 00:26:21.311 Attached to 0000:00:11.0 00:26:21.311 Attached to 0000:00:13.0 00:26:21.311 Attached to 0000:00:12.0 00:26:21.311 Cleaning up... 
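All four controllers the probe attached (1b36:0010) are QEMU's emulated NVMe devices on this vagrant VM. Two quick ways to cross-check what env_dpdk_post_init just saw, assuming the build layout used elsewhere in this log for the examples directory:

  # Vendor:device 1b36:0010 is the QEMU NVMe controller.
  lspci -nn | grep '1b36:0010'

  # SPDK's identify example attaches to every NVMe controller it can claim and
  # dumps controller data, mirroring the 00:10.0 .. 00:13.0 probe order above.
  sudo /home/vagrant/spdk_repo/spdk/build/examples/identify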
00:26:21.311 00:26:21.311 real 0m0.355s 00:26:21.311 user 0m0.120s 00:26:21.311 sys 0m0.136s 00:26:21.311 ************************************ 00:26:21.311 END TEST env_dpdk_post_init 00:26:21.311 ************************************ 00:26:21.311 13:21:14 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.311 13:21:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:26:21.311 13:21:14 env -- env/env.sh@26 -- # uname 00:26:21.311 13:21:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:26:21.311 13:21:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:26:21.311 13:21:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:21.311 13:21:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.311 13:21:14 env -- common/autotest_common.sh@10 -- # set +x 00:26:21.311 ************************************ 00:26:21.311 START TEST env_mem_callbacks 00:26:21.311 ************************************ 00:26:21.311 13:21:14 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:26:21.311 EAL: Detected CPU lcores: 10 00:26:21.311 EAL: Detected NUMA nodes: 1 00:26:21.311 EAL: Detected shared linkage of DPDK 00:26:21.311 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:26:21.311 EAL: Selected IOVA mode 'PA' 00:26:21.569 TELEMETRY: No legacy callbacks, legacy socket not created 00:26:21.569 00:26:21.569 00:26:21.569 CUnit - A unit testing framework for C - Version 2.1-3 00:26:21.569 http://cunit.sourceforge.net/ 00:26:21.569 00:26:21.569 00:26:21.569 Suite: memory 00:26:21.569 Test: test ... 00:26:21.569 register 0x200000200000 2097152 00:26:21.569 malloc 3145728 00:26:21.569 register 0x200000400000 4194304 00:26:21.569 buf 0x2000004fffc0 len 3145728 PASSED 00:26:21.569 malloc 64 00:26:21.569 buf 0x2000004ffec0 len 64 PASSED 00:26:21.569 malloc 4194304 00:26:21.569 register 0x200000800000 6291456 00:26:21.569 buf 0x2000009fffc0 len 4194304 PASSED 00:26:21.569 free 0x2000004fffc0 3145728 00:26:21.569 free 0x2000004ffec0 64 00:26:21.569 unregister 0x200000400000 4194304 PASSED 00:26:21.569 free 0x2000009fffc0 4194304 00:26:21.569 unregister 0x200000800000 6291456 PASSED 00:26:21.569 malloc 8388608 00:26:21.569 register 0x200000400000 10485760 00:26:21.569 buf 0x2000005fffc0 len 8388608 PASSED 00:26:21.569 free 0x2000005fffc0 8388608 00:26:21.569 unregister 0x200000400000 10485760 PASSED 00:26:21.569 passed 00:26:21.569 00:26:21.569 Run Summary: Type Total Ran Passed Failed Inactive 00:26:21.569 suites 1 1 n/a 0 0 00:26:21.569 tests 1 1 1 0 0 00:26:21.569 asserts 15 15 15 0 n/a 00:26:21.569 00:26:21.569 Elapsed time = 0.081 seconds 00:26:21.569 00:26:21.569 real 0m0.324s 00:26:21.569 user 0m0.124s 00:26:21.569 sys 0m0.094s 00:26:21.569 ************************************ 00:26:21.569 END TEST env_mem_callbacks 00:26:21.569 ************************************ 00:26:21.569 13:21:14 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.569 13:21:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:26:21.569 ************************************ 00:26:21.569 END TEST env 00:26:21.569 ************************************ 00:26:21.569 00:26:21.569 real 0m12.505s 00:26:21.569 user 0m9.937s 00:26:21.569 sys 0m2.163s 00:26:21.569 13:21:14 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.569 13:21:14 env -- 
common/autotest_common.sh@10 -- # set +x 00:26:21.826 13:21:14 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:26:21.826 13:21:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:21.826 13:21:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.826 13:21:14 -- common/autotest_common.sh@10 -- # set +x 00:26:21.826 ************************************ 00:26:21.826 START TEST rpc 00:26:21.826 ************************************ 00:26:21.826 13:21:14 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:26:21.826 * Looking for test storage... 00:26:21.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:26:21.826 13:21:14 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:21.826 13:21:14 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:21.826 13:21:14 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:26:22.084 13:21:14 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:22.084 13:21:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.084 13:21:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.084 13:21:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.084 13:21:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.084 13:21:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.084 13:21:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.084 13:21:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.084 13:21:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.084 13:21:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.084 13:21:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.084 13:21:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.084 13:21:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:22.084 13:21:14 rpc -- scripts/common.sh@345 -- # : 1 00:26:22.084 13:21:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.084 13:21:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.084 13:21:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:26:22.084 13:21:14 rpc -- scripts/common.sh@353 -- # local d=1 00:26:22.084 13:21:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.084 13:21:14 rpc -- scripts/common.sh@355 -- # echo 1 00:26:22.084 13:21:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.084 13:21:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:26:22.084 13:21:14 rpc -- scripts/common.sh@353 -- # local d=2 00:26:22.084 13:21:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.084 13:21:14 rpc -- scripts/common.sh@355 -- # echo 2 00:26:22.084 13:21:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.084 13:21:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.084 13:21:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.084 13:21:14 rpc -- scripts/common.sh@368 -- # return 0 00:26:22.084 13:21:14 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.084 13:21:14 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:22.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.084 --rc genhtml_branch_coverage=1 00:26:22.084 --rc genhtml_function_coverage=1 00:26:22.084 --rc genhtml_legend=1 00:26:22.084 --rc geninfo_all_blocks=1 00:26:22.084 --rc geninfo_unexecuted_blocks=1 00:26:22.084 00:26:22.084 ' 00:26:22.084 13:21:14 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:22.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.084 --rc genhtml_branch_coverage=1 00:26:22.084 --rc genhtml_function_coverage=1 00:26:22.084 --rc genhtml_legend=1 00:26:22.084 --rc geninfo_all_blocks=1 00:26:22.084 --rc geninfo_unexecuted_blocks=1 00:26:22.084 00:26:22.084 ' 00:26:22.084 13:21:14 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:22.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.084 --rc genhtml_branch_coverage=1 00:26:22.084 --rc genhtml_function_coverage=1 00:26:22.084 --rc genhtml_legend=1 00:26:22.084 --rc geninfo_all_blocks=1 00:26:22.084 --rc geninfo_unexecuted_blocks=1 00:26:22.084 00:26:22.084 ' 00:26:22.084 13:21:14 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:22.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.084 --rc genhtml_branch_coverage=1 00:26:22.084 --rc genhtml_function_coverage=1 00:26:22.084 --rc genhtml_legend=1 00:26:22.084 --rc geninfo_all_blocks=1 00:26:22.084 --rc geninfo_unexecuted_blocks=1 00:26:22.084 00:26:22.084 ' 00:26:22.084 13:21:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58150 00:26:22.084 13:21:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:22.084 13:21:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:26:22.084 13:21:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58150 00:26:22.084 13:21:14 rpc -- common/autotest_common.sh@835 -- # '[' -z 58150 ']' 00:26:22.084 13:21:14 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.084 13:21:14 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:22.084 13:21:14 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
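Everything the rpc suites below do through rpc_cmd goes over this UNIX socket; once spdk_tgt reports it is listening, the rpc_integrity sequence can be replayed by hand with scripts/rpc.py (every method here appears verbatim in the test output below; jq is only used to count the returned array):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC rpc_get_methods > /dev/null      # succeeds once the target is up
  $RPC bdev_malloc_create 8 512         # 8 MiB malloc bdev, 512 B blocks; prints its name
  $RPC bdev_passthru_create -b Malloc0 -p Passthru0
  $RPC bdev_get_bdevs | jq length       # expect 2: Malloc0 plus Passthru0
  $RPC bdev_passthru_delete Passthru0
  $RPC bdev_malloc_delete Malloc0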
00:26:22.084 13:21:14 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:22.084 13:21:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:22.084 [2024-12-06 13:21:15.113760] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:26:22.084 [2024-12-06 13:21:15.114240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58150 ] 00:26:22.341 [2024-12-06 13:21:15.313983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.597 [2024-12-06 13:21:15.467911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:26:22.597 [2024-12-06 13:21:15.468213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58150' to capture a snapshot of events at runtime. 00:26:22.597 [2024-12-06 13:21:15.468369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.597 [2024-12-06 13:21:15.468566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.597 [2024-12-06 13:21:15.468610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58150 for offline analysis/debug. 00:26:22.597 [2024-12-06 13:21:15.470151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.971 13:21:16 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.971 13:21:16 rpc -- common/autotest_common.sh@868 -- # return 0 00:26:23.971 13:21:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:26:23.971 13:21:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:26:23.971 13:21:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:26:23.971 13:21:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:26:23.971 13:21:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:23.971 13:21:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:23.971 13:21:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:23.971 ************************************ 00:26:23.971 START TEST rpc_integrity 00:26:23.971 ************************************ 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:26:23.971 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.971 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:26:23.971 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:26:23.971 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:26:23.971 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.971 13:21:16 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.971 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:26:23.971 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.971 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:26:23.971 { 00:26:23.971 "name": "Malloc0", 00:26:23.971 "aliases": [ 00:26:23.971 "81b045b6-868b-4720-b8af-35c260403995" 00:26:23.971 ], 00:26:23.971 "product_name": "Malloc disk", 00:26:23.971 "block_size": 512, 00:26:23.971 "num_blocks": 16384, 00:26:23.971 "uuid": "81b045b6-868b-4720-b8af-35c260403995", 00:26:23.971 "assigned_rate_limits": { 00:26:23.971 "rw_ios_per_sec": 0, 00:26:23.971 "rw_mbytes_per_sec": 0, 00:26:23.971 "r_mbytes_per_sec": 0, 00:26:23.971 "w_mbytes_per_sec": 0 00:26:23.971 }, 00:26:23.971 "claimed": false, 00:26:23.971 "zoned": false, 00:26:23.971 "supported_io_types": { 00:26:23.971 "read": true, 00:26:23.971 "write": true, 00:26:23.971 "unmap": true, 00:26:23.971 "flush": true, 00:26:23.971 "reset": true, 00:26:23.971 "nvme_admin": false, 00:26:23.971 "nvme_io": false, 00:26:23.971 "nvme_io_md": false, 00:26:23.971 "write_zeroes": true, 00:26:23.971 "zcopy": true, 00:26:23.971 "get_zone_info": false, 00:26:23.971 "zone_management": false, 00:26:23.971 "zone_append": false, 00:26:23.971 "compare": false, 00:26:23.971 "compare_and_write": false, 00:26:23.971 "abort": true, 00:26:23.971 "seek_hole": false, 00:26:23.971 "seek_data": false, 00:26:23.971 "copy": true, 00:26:23.971 "nvme_iov_md": false 00:26:23.971 }, 00:26:23.971 "memory_domains": [ 00:26:23.971 { 00:26:23.971 "dma_device_id": "system", 00:26:23.971 "dma_device_type": 1 00:26:23.971 }, 00:26:23.971 { 00:26:23.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.971 "dma_device_type": 2 00:26:23.971 } 00:26:23.971 ], 00:26:23.971 "driver_specific": {} 00:26:23.971 } 00:26:23.971 ]' 00:26:23.971 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:26:23.971 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:26:23.971 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:23.971 [2024-12-06 13:21:16.809695] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:26:23.971 [2024-12-06 13:21:16.809804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.971 [2024-12-06 13:21:16.809845] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:23.971 [2024-12-06 13:21:16.809865] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.971 [2024-12-06 13:21:16.813364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.971 Passthru0 00:26:23.971 [2024-12-06 13:21:16.813569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.971 
13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:23.971 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.971 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:26:23.971 { 00:26:23.971 "name": "Malloc0", 00:26:23.971 "aliases": [ 00:26:23.971 "81b045b6-868b-4720-b8af-35c260403995" 00:26:23.971 ], 00:26:23.971 "product_name": "Malloc disk", 00:26:23.971 "block_size": 512, 00:26:23.971 "num_blocks": 16384, 00:26:23.971 "uuid": "81b045b6-868b-4720-b8af-35c260403995", 00:26:23.971 "assigned_rate_limits": { 00:26:23.971 "rw_ios_per_sec": 0, 00:26:23.971 "rw_mbytes_per_sec": 0, 00:26:23.972 "r_mbytes_per_sec": 0, 00:26:23.972 "w_mbytes_per_sec": 0 00:26:23.972 }, 00:26:23.972 "claimed": true, 00:26:23.972 "claim_type": "exclusive_write", 00:26:23.972 "zoned": false, 00:26:23.972 "supported_io_types": { 00:26:23.972 "read": true, 00:26:23.972 "write": true, 00:26:23.972 "unmap": true, 00:26:23.972 "flush": true, 00:26:23.972 "reset": true, 00:26:23.972 "nvme_admin": false, 00:26:23.972 "nvme_io": false, 00:26:23.972 "nvme_io_md": false, 00:26:23.972 "write_zeroes": true, 00:26:23.972 "zcopy": true, 00:26:23.972 "get_zone_info": false, 00:26:23.972 "zone_management": false, 00:26:23.972 "zone_append": false, 00:26:23.972 "compare": false, 00:26:23.972 "compare_and_write": false, 00:26:23.972 "abort": true, 00:26:23.972 "seek_hole": false, 00:26:23.972 "seek_data": false, 00:26:23.972 "copy": true, 00:26:23.972 "nvme_iov_md": false 00:26:23.972 }, 00:26:23.972 "memory_domains": [ 00:26:23.972 { 00:26:23.972 "dma_device_id": "system", 00:26:23.972 "dma_device_type": 1 00:26:23.972 }, 00:26:23.972 { 00:26:23.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.972 "dma_device_type": 2 00:26:23.972 } 00:26:23.972 ], 00:26:23.972 "driver_specific": {} 00:26:23.972 }, 00:26:23.972 { 00:26:23.972 "name": "Passthru0", 00:26:23.972 "aliases": [ 00:26:23.972 "74803b08-8197-52b4-912f-3f9529f479f3" 00:26:23.972 ], 00:26:23.972 "product_name": "passthru", 00:26:23.972 "block_size": 512, 00:26:23.972 "num_blocks": 16384, 00:26:23.972 "uuid": "74803b08-8197-52b4-912f-3f9529f479f3", 00:26:23.972 "assigned_rate_limits": { 00:26:23.972 "rw_ios_per_sec": 0, 00:26:23.972 "rw_mbytes_per_sec": 0, 00:26:23.972 "r_mbytes_per_sec": 0, 00:26:23.972 "w_mbytes_per_sec": 0 00:26:23.972 }, 00:26:23.972 "claimed": false, 00:26:23.972 "zoned": false, 00:26:23.972 "supported_io_types": { 00:26:23.972 "read": true, 00:26:23.972 "write": true, 00:26:23.972 "unmap": true, 00:26:23.972 "flush": true, 00:26:23.972 "reset": true, 00:26:23.972 "nvme_admin": false, 00:26:23.972 "nvme_io": false, 00:26:23.972 "nvme_io_md": false, 00:26:23.972 "write_zeroes": true, 00:26:23.972 "zcopy": true, 00:26:23.972 "get_zone_info": false, 00:26:23.972 "zone_management": false, 00:26:23.972 "zone_append": false, 00:26:23.972 "compare": false, 00:26:23.972 "compare_and_write": false, 00:26:23.972 "abort": true, 00:26:23.972 "seek_hole": false, 00:26:23.972 "seek_data": false, 00:26:23.972 "copy": true, 00:26:23.972 "nvme_iov_md": false 00:26:23.972 }, 00:26:23.972 "memory_domains": [ 00:26:23.972 { 00:26:23.972 "dma_device_id": "system", 00:26:23.972 "dma_device_type": 1 00:26:23.972 }, 00:26:23.972 { 00:26:23.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.972 "dma_device_type": 2 
00:26:23.972 } 00:26:23.972 ], 00:26:23.972 "driver_specific": { 00:26:23.972 "passthru": { 00:26:23.972 "name": "Passthru0", 00:26:23.972 "base_bdev_name": "Malloc0" 00:26:23.972 } 00:26:23.972 } 00:26:23.972 } 00:26:23.972 ]' 00:26:23.972 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:26:23.972 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:26:23.972 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:26:23.972 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.972 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:23.972 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.972 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:23.972 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.972 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:23.972 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.972 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:23.972 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.972 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:23.972 13:21:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.972 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:26:23.972 13:21:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:26:23.972 ************************************ 00:26:23.972 END TEST rpc_integrity 00:26:23.972 ************************************ 00:26:23.972 13:21:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:26:23.972 00:26:23.972 real 0m0.388s 00:26:23.972 user 0m0.221s 00:26:23.972 sys 0m0.054s 00:26:23.972 13:21:17 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:23.972 13:21:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:24.230 13:21:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:26:24.230 13:21:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:24.230 13:21:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.230 13:21:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:24.230 ************************************ 00:26:24.230 START TEST rpc_plugins 00:26:24.230 ************************************ 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:26:24.230 13:21:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.230 13:21:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:26:24.230 13:21:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.230 13:21:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:26:24.230 { 00:26:24.230 "name": "Malloc1", 00:26:24.230 "aliases": 
[ 00:26:24.230 "5069a7c9-a5f0-4a3d-803a-a5244ae0c3c7" 00:26:24.230 ], 00:26:24.230 "product_name": "Malloc disk", 00:26:24.230 "block_size": 4096, 00:26:24.230 "num_blocks": 256, 00:26:24.230 "uuid": "5069a7c9-a5f0-4a3d-803a-a5244ae0c3c7", 00:26:24.230 "assigned_rate_limits": { 00:26:24.230 "rw_ios_per_sec": 0, 00:26:24.230 "rw_mbytes_per_sec": 0, 00:26:24.230 "r_mbytes_per_sec": 0, 00:26:24.230 "w_mbytes_per_sec": 0 00:26:24.230 }, 00:26:24.230 "claimed": false, 00:26:24.230 "zoned": false, 00:26:24.230 "supported_io_types": { 00:26:24.230 "read": true, 00:26:24.230 "write": true, 00:26:24.230 "unmap": true, 00:26:24.230 "flush": true, 00:26:24.230 "reset": true, 00:26:24.230 "nvme_admin": false, 00:26:24.230 "nvme_io": false, 00:26:24.230 "nvme_io_md": false, 00:26:24.230 "write_zeroes": true, 00:26:24.230 "zcopy": true, 00:26:24.230 "get_zone_info": false, 00:26:24.230 "zone_management": false, 00:26:24.230 "zone_append": false, 00:26:24.230 "compare": false, 00:26:24.230 "compare_and_write": false, 00:26:24.230 "abort": true, 00:26:24.230 "seek_hole": false, 00:26:24.230 "seek_data": false, 00:26:24.230 "copy": true, 00:26:24.230 "nvme_iov_md": false 00:26:24.230 }, 00:26:24.230 "memory_domains": [ 00:26:24.230 { 00:26:24.230 "dma_device_id": "system", 00:26:24.230 "dma_device_type": 1 00:26:24.230 }, 00:26:24.230 { 00:26:24.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.230 "dma_device_type": 2 00:26:24.230 } 00:26:24.230 ], 00:26:24.230 "driver_specific": {} 00:26:24.230 } 00:26:24.230 ]' 00:26:24.230 13:21:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:26:24.230 13:21:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:26:24.230 13:21:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.230 13:21:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.230 13:21:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:26:24.230 13:21:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:26:24.230 ************************************ 00:26:24.230 END TEST rpc_plugins 00:26:24.230 ************************************ 00:26:24.230 13:21:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:26:24.230 00:26:24.230 real 0m0.174s 00:26:24.230 user 0m0.097s 00:26:24.230 sys 0m0.028s 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:24.230 13:21:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:24.230 13:21:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:26:24.230 13:21:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:24.230 13:21:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.230 13:21:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:24.231 ************************************ 00:26:24.231 START TEST rpc_trace_cmd_test 00:26:24.231 ************************************ 00:26:24.231 13:21:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:26:24.231 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:26:24.231 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:26:24.231 13:21:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.231 13:21:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:26:24.489 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58150", 00:26:24.489 "tpoint_group_mask": "0x8", 00:26:24.489 "iscsi_conn": { 00:26:24.489 "mask": "0x2", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "scsi": { 00:26:24.489 "mask": "0x4", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "bdev": { 00:26:24.489 "mask": "0x8", 00:26:24.489 "tpoint_mask": "0xffffffffffffffff" 00:26:24.489 }, 00:26:24.489 "nvmf_rdma": { 00:26:24.489 "mask": "0x10", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "nvmf_tcp": { 00:26:24.489 "mask": "0x20", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "ftl": { 00:26:24.489 "mask": "0x40", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "blobfs": { 00:26:24.489 "mask": "0x80", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "dsa": { 00:26:24.489 "mask": "0x200", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "thread": { 00:26:24.489 "mask": "0x400", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "nvme_pcie": { 00:26:24.489 "mask": "0x800", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "iaa": { 00:26:24.489 "mask": "0x1000", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "nvme_tcp": { 00:26:24.489 "mask": "0x2000", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "bdev_nvme": { 00:26:24.489 "mask": "0x4000", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "sock": { 00:26:24.489 "mask": "0x8000", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "blob": { 00:26:24.489 "mask": "0x10000", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "bdev_raid": { 00:26:24.489 "mask": "0x20000", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 }, 00:26:24.489 "scheduler": { 00:26:24.489 "mask": "0x40000", 00:26:24.489 "tpoint_mask": "0x0" 00:26:24.489 } 00:26:24.489 }' 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:26:24.489 ************************************ 00:26:24.489 END TEST rpc_trace_cmd_test 00:26:24.489 ************************************ 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:26:24.489 00:26:24.489 real 0m0.251s 
00:26:24.489 user 0m0.193s 00:26:24.489 sys 0m0.050s 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:24.489 13:21:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.748 13:21:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:26:24.748 13:21:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:26:24.748 13:21:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:26:24.748 13:21:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:24.748 13:21:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.748 13:21:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:24.748 ************************************ 00:26:24.748 START TEST rpc_daemon_integrity 00:26:24.748 ************************************ 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:26:24.748 { 00:26:24.748 "name": "Malloc2", 00:26:24.748 "aliases": [ 00:26:24.748 "7f3ba1f6-50d8-42cb-a439-6b931381bf38" 00:26:24.748 ], 00:26:24.748 "product_name": "Malloc disk", 00:26:24.748 "block_size": 512, 00:26:24.748 "num_blocks": 16384, 00:26:24.748 "uuid": "7f3ba1f6-50d8-42cb-a439-6b931381bf38", 00:26:24.748 "assigned_rate_limits": { 00:26:24.748 "rw_ios_per_sec": 0, 00:26:24.748 "rw_mbytes_per_sec": 0, 00:26:24.748 "r_mbytes_per_sec": 0, 00:26:24.748 "w_mbytes_per_sec": 0 00:26:24.748 }, 00:26:24.748 "claimed": false, 00:26:24.748 "zoned": false, 00:26:24.748 "supported_io_types": { 00:26:24.748 "read": true, 00:26:24.748 "write": true, 00:26:24.748 "unmap": true, 00:26:24.748 "flush": true, 00:26:24.748 "reset": true, 00:26:24.748 "nvme_admin": false, 00:26:24.748 "nvme_io": false, 00:26:24.748 "nvme_io_md": false, 00:26:24.748 "write_zeroes": true, 00:26:24.748 "zcopy": true, 00:26:24.748 "get_zone_info": false, 00:26:24.748 "zone_management": false, 00:26:24.748 "zone_append": false, 00:26:24.748 "compare": false, 00:26:24.748 
"compare_and_write": false, 00:26:24.748 "abort": true, 00:26:24.748 "seek_hole": false, 00:26:24.748 "seek_data": false, 00:26:24.748 "copy": true, 00:26:24.748 "nvme_iov_md": false 00:26:24.748 }, 00:26:24.748 "memory_domains": [ 00:26:24.748 { 00:26:24.748 "dma_device_id": "system", 00:26:24.748 "dma_device_type": 1 00:26:24.748 }, 00:26:24.748 { 00:26:24.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.748 "dma_device_type": 2 00:26:24.748 } 00:26:24.748 ], 00:26:24.748 "driver_specific": {} 00:26:24.748 } 00:26:24.748 ]' 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:24.748 [2024-12-06 13:21:17.793929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:26:24.748 [2024-12-06 13:21:17.794032] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.748 [2024-12-06 13:21:17.794067] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:24.748 [2024-12-06 13:21:17.794086] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.748 [2024-12-06 13:21:17.797648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.748 [2024-12-06 13:21:17.797719] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:26:24.748 Passthru0 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.748 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:26:24.748 { 00:26:24.748 "name": "Malloc2", 00:26:24.748 "aliases": [ 00:26:24.748 "7f3ba1f6-50d8-42cb-a439-6b931381bf38" 00:26:24.748 ], 00:26:24.748 "product_name": "Malloc disk", 00:26:24.748 "block_size": 512, 00:26:24.748 "num_blocks": 16384, 00:26:24.748 "uuid": "7f3ba1f6-50d8-42cb-a439-6b931381bf38", 00:26:24.748 "assigned_rate_limits": { 00:26:24.748 "rw_ios_per_sec": 0, 00:26:24.748 "rw_mbytes_per_sec": 0, 00:26:24.748 "r_mbytes_per_sec": 0, 00:26:24.748 "w_mbytes_per_sec": 0 00:26:24.748 }, 00:26:24.748 "claimed": true, 00:26:24.748 "claim_type": "exclusive_write", 00:26:24.748 "zoned": false, 00:26:24.748 "supported_io_types": { 00:26:24.748 "read": true, 00:26:24.748 "write": true, 00:26:24.749 "unmap": true, 00:26:24.749 "flush": true, 00:26:24.749 "reset": true, 00:26:24.749 "nvme_admin": false, 00:26:24.749 "nvme_io": false, 00:26:24.749 "nvme_io_md": false, 00:26:24.749 "write_zeroes": true, 00:26:24.749 "zcopy": true, 00:26:24.749 "get_zone_info": false, 00:26:24.749 "zone_management": false, 00:26:24.749 "zone_append": false, 00:26:24.749 "compare": false, 00:26:24.749 "compare_and_write": false, 00:26:24.749 "abort": true, 00:26:24.749 "seek_hole": false, 00:26:24.749 "seek_data": false, 
00:26:24.749 "copy": true, 00:26:24.749 "nvme_iov_md": false 00:26:24.749 }, 00:26:24.749 "memory_domains": [ 00:26:24.749 { 00:26:24.749 "dma_device_id": "system", 00:26:24.749 "dma_device_type": 1 00:26:24.749 }, 00:26:24.749 { 00:26:24.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.749 "dma_device_type": 2 00:26:24.749 } 00:26:24.749 ], 00:26:24.749 "driver_specific": {} 00:26:24.749 }, 00:26:24.749 { 00:26:24.749 "name": "Passthru0", 00:26:24.749 "aliases": [ 00:26:24.749 "ace1ec78-6049-59f2-912a-3a98cb3af8b3" 00:26:24.749 ], 00:26:24.749 "product_name": "passthru", 00:26:24.749 "block_size": 512, 00:26:24.749 "num_blocks": 16384, 00:26:24.749 "uuid": "ace1ec78-6049-59f2-912a-3a98cb3af8b3", 00:26:24.749 "assigned_rate_limits": { 00:26:24.749 "rw_ios_per_sec": 0, 00:26:24.749 "rw_mbytes_per_sec": 0, 00:26:24.749 "r_mbytes_per_sec": 0, 00:26:24.749 "w_mbytes_per_sec": 0 00:26:24.749 }, 00:26:24.749 "claimed": false, 00:26:24.749 "zoned": false, 00:26:24.749 "supported_io_types": { 00:26:24.749 "read": true, 00:26:24.749 "write": true, 00:26:24.749 "unmap": true, 00:26:24.749 "flush": true, 00:26:24.749 "reset": true, 00:26:24.749 "nvme_admin": false, 00:26:24.749 "nvme_io": false, 00:26:24.749 "nvme_io_md": false, 00:26:24.749 "write_zeroes": true, 00:26:24.749 "zcopy": true, 00:26:24.749 "get_zone_info": false, 00:26:24.749 "zone_management": false, 00:26:24.749 "zone_append": false, 00:26:24.749 "compare": false, 00:26:24.749 "compare_and_write": false, 00:26:24.749 "abort": true, 00:26:24.749 "seek_hole": false, 00:26:24.749 "seek_data": false, 00:26:24.749 "copy": true, 00:26:24.749 "nvme_iov_md": false 00:26:24.749 }, 00:26:24.749 "memory_domains": [ 00:26:24.749 { 00:26:24.749 "dma_device_id": "system", 00:26:24.749 "dma_device_type": 1 00:26:24.749 }, 00:26:24.749 { 00:26:24.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.749 "dma_device_type": 2 00:26:24.749 } 00:26:24.749 ], 00:26:24.749 "driver_specific": { 00:26:24.749 "passthru": { 00:26:24.749 "name": "Passthru0", 00:26:24.749 "base_bdev_name": "Malloc2" 00:26:24.749 } 00:26:24.749 } 00:26:24.749 } 00:26:24.749 ]' 00:26:24.749 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:26:25.008 13:21:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:26:25.008 ************************************ 00:26:25.008 END TEST rpc_daemon_integrity 00:26:25.008 ************************************ 00:26:25.008 13:21:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:26:25.008 00:26:25.008 real 0m0.375s 00:26:25.008 user 0m0.206s 00:26:25.008 sys 0m0.053s 00:26:25.008 13:21:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.008 13:21:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:25.008 13:21:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:25.008 13:21:18 rpc -- rpc/rpc.sh@84 -- # killprocess 58150 00:26:25.008 13:21:18 rpc -- common/autotest_common.sh@954 -- # '[' -z 58150 ']' 00:26:25.008 13:21:18 rpc -- common/autotest_common.sh@958 -- # kill -0 58150 00:26:25.008 13:21:18 rpc -- common/autotest_common.sh@959 -- # uname 00:26:25.008 13:21:18 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:25.008 13:21:18 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58150 00:26:25.267 killing process with pid 58150 00:26:25.267 13:21:18 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:25.267 13:21:18 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:25.267 13:21:18 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58150' 00:26:25.267 13:21:18 rpc -- common/autotest_common.sh@973 -- # kill 58150 00:26:25.267 13:21:18 rpc -- common/autotest_common.sh@978 -- # wait 58150 00:26:28.599 00:26:28.599 real 0m6.404s 00:26:28.599 user 0m6.807s 00:26:28.599 sys 0m1.236s 00:26:28.599 13:21:21 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.599 ************************************ 00:26:28.599 END TEST rpc 00:26:28.599 ************************************ 00:26:28.599 13:21:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:28.599 13:21:21 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:26:28.599 13:21:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:28.599 13:21:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.599 13:21:21 -- common/autotest_common.sh@10 -- # set +x 00:26:28.599 ************************************ 00:26:28.599 START TEST skip_rpc 00:26:28.599 ************************************ 00:26:28.599 13:21:21 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:26:28.599 * Looking for test storage... 
00:26:28.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:26:28.599 13:21:21 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:28.599 13:21:21 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:26:28.599 13:21:21 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:28.599 13:21:21 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@345 -- # : 1 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:26:28.599 13:21:21 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.600 13:21:21 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:26:28.600 13:21:21 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.600 13:21:21 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.600 13:21:21 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.600 13:21:21 skip_rpc -- scripts/common.sh@368 -- # return 0 00:26:28.600 13:21:21 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.600 13:21:21 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:28.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.600 --rc genhtml_branch_coverage=1 00:26:28.600 --rc genhtml_function_coverage=1 00:26:28.600 --rc genhtml_legend=1 00:26:28.600 --rc geninfo_all_blocks=1 00:26:28.600 --rc geninfo_unexecuted_blocks=1 00:26:28.600 00:26:28.600 ' 00:26:28.600 13:21:21 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:28.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.600 --rc genhtml_branch_coverage=1 00:26:28.600 --rc genhtml_function_coverage=1 00:26:28.600 --rc genhtml_legend=1 00:26:28.600 --rc geninfo_all_blocks=1 00:26:28.600 --rc geninfo_unexecuted_blocks=1 00:26:28.600 00:26:28.600 ' 00:26:28.600 13:21:21 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:28.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.600 --rc genhtml_branch_coverage=1 00:26:28.600 --rc genhtml_function_coverage=1 00:26:28.600 --rc genhtml_legend=1 00:26:28.600 --rc geninfo_all_blocks=1 00:26:28.600 --rc geninfo_unexecuted_blocks=1 00:26:28.600 00:26:28.600 ' 00:26:28.600 13:21:21 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:28.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.600 --rc genhtml_branch_coverage=1 00:26:28.600 --rc genhtml_function_coverage=1 00:26:28.600 --rc genhtml_legend=1 00:26:28.600 --rc geninfo_all_blocks=1 00:26:28.600 --rc geninfo_unexecuted_blocks=1 00:26:28.600 00:26:28.600 ' 00:26:28.600 13:21:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:28.600 13:21:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:28.600 13:21:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:26:28.600 13:21:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:28.600 13:21:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.600 13:21:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:28.600 ************************************ 00:26:28.600 START TEST skip_rpc 00:26:28.600 ************************************ 00:26:28.600 13:21:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:26:28.600 13:21:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58394 00:26:28.600 13:21:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:26:28.600 13:21:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:28.600 13:21:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:26:28.600 [2024-12-06 13:21:21.584499] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:26:28.600 [2024-12-06 13:21:21.584752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58394 ] 00:26:28.859 [2024-12-06 13:21:21.777371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.117 [2024-12-06 13:21:22.006267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58394 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58394 ']' 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58394 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58394 00:26:34.400 killing process with pid 58394 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58394' 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58394 00:26:34.400 13:21:26 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58394 00:26:36.928 00:26:36.928 real 0m8.132s 00:26:36.928 user 0m7.377s 00:26:36.928 sys 0m0.652s 00:26:36.928 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:36.928 ************************************ 00:26:36.928 END TEST skip_rpc 00:26:36.928 ************************************ 00:26:36.928 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:26:36.928 13:21:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:26:36.928 13:21:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:36.928 13:21:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:36.928 13:21:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:36.928 ************************************ 00:26:36.928 START TEST skip_rpc_with_json 00:26:36.928 ************************************ 00:26:36.928 13:21:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:26:36.928 13:21:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:26:36.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.928 13:21:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58505 00:26:36.928 13:21:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:36.928 13:21:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58505 00:26:36.928 13:21:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58505 ']' 00:26:36.928 13:21:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:36.928 13:21:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.928 13:21:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.928 13:21:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.928 13:21:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.928 13:21:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:36.928 [2024-12-06 13:21:29.802962] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:26:36.928 [2024-12-06 13:21:29.803441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58505 ] 00:26:36.928 [2024-12-06 13:21:29.999559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.187 [2024-12-06 13:21:30.171935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.614 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.614 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:26:38.614 13:21:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:26:38.614 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.614 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:38.614 [2024-12-06 13:21:31.419762] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:26:38.614 request: 00:26:38.614 { 00:26:38.614 "trtype": "tcp", 00:26:38.614 "method": "nvmf_get_transports", 00:26:38.614 "req_id": 1 00:26:38.614 } 00:26:38.614 Got JSON-RPC error response 00:26:38.614 response: 00:26:38.614 { 00:26:38.614 "code": -19, 00:26:38.614 "message": "No such device" 00:26:38.614 } 00:26:38.614 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:38.615 13:21:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:26:38.615 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.615 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:38.615 [2024-12-06 13:21:31.431929] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.615 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.615 13:21:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:26:38.615 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.615 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:38.615 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.615 13:21:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:38.615 { 00:26:38.615 "subsystems": [ 00:26:38.615 { 00:26:38.615 "subsystem": "fsdev", 00:26:38.615 "config": [ 00:26:38.615 { 00:26:38.615 "method": "fsdev_set_opts", 00:26:38.615 "params": { 00:26:38.615 "fsdev_io_pool_size": 65535, 00:26:38.615 "fsdev_io_cache_size": 256 00:26:38.615 } 00:26:38.615 } 00:26:38.615 ] 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "keyring", 00:26:38.615 "config": [] 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "iobuf", 00:26:38.615 "config": [ 00:26:38.615 { 00:26:38.615 "method": "iobuf_set_options", 00:26:38.615 "params": { 00:26:38.615 "small_pool_count": 8192, 00:26:38.615 "large_pool_count": 1024, 00:26:38.615 "small_bufsize": 8192, 00:26:38.615 "large_bufsize": 135168, 00:26:38.615 "enable_numa": false 00:26:38.615 } 00:26:38.615 } 00:26:38.615 ] 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "sock", 00:26:38.615 "config": [ 00:26:38.615 { 
00:26:38.615 "method": "sock_set_default_impl", 00:26:38.615 "params": { 00:26:38.615 "impl_name": "posix" 00:26:38.615 } 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "method": "sock_impl_set_options", 00:26:38.615 "params": { 00:26:38.615 "impl_name": "ssl", 00:26:38.615 "recv_buf_size": 4096, 00:26:38.615 "send_buf_size": 4096, 00:26:38.615 "enable_recv_pipe": true, 00:26:38.615 "enable_quickack": false, 00:26:38.615 "enable_placement_id": 0, 00:26:38.615 "enable_zerocopy_send_server": true, 00:26:38.615 "enable_zerocopy_send_client": false, 00:26:38.615 "zerocopy_threshold": 0, 00:26:38.615 "tls_version": 0, 00:26:38.615 "enable_ktls": false 00:26:38.615 } 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "method": "sock_impl_set_options", 00:26:38.615 "params": { 00:26:38.615 "impl_name": "posix", 00:26:38.615 "recv_buf_size": 2097152, 00:26:38.615 "send_buf_size": 2097152, 00:26:38.615 "enable_recv_pipe": true, 00:26:38.615 "enable_quickack": false, 00:26:38.615 "enable_placement_id": 0, 00:26:38.615 "enable_zerocopy_send_server": true, 00:26:38.615 "enable_zerocopy_send_client": false, 00:26:38.615 "zerocopy_threshold": 0, 00:26:38.615 "tls_version": 0, 00:26:38.615 "enable_ktls": false 00:26:38.615 } 00:26:38.615 } 00:26:38.615 ] 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "vmd", 00:26:38.615 "config": [] 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "accel", 00:26:38.615 "config": [ 00:26:38.615 { 00:26:38.615 "method": "accel_set_options", 00:26:38.615 "params": { 00:26:38.615 "small_cache_size": 128, 00:26:38.615 "large_cache_size": 16, 00:26:38.615 "task_count": 2048, 00:26:38.615 "sequence_count": 2048, 00:26:38.615 "buf_count": 2048 00:26:38.615 } 00:26:38.615 } 00:26:38.615 ] 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "bdev", 00:26:38.615 "config": [ 00:26:38.615 { 00:26:38.615 "method": "bdev_set_options", 00:26:38.615 "params": { 00:26:38.615 "bdev_io_pool_size": 65535, 00:26:38.615 "bdev_io_cache_size": 256, 00:26:38.615 "bdev_auto_examine": true, 00:26:38.615 "iobuf_small_cache_size": 128, 00:26:38.615 "iobuf_large_cache_size": 16 00:26:38.615 } 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "method": "bdev_raid_set_options", 00:26:38.615 "params": { 00:26:38.615 "process_window_size_kb": 1024, 00:26:38.615 "process_max_bandwidth_mb_sec": 0 00:26:38.615 } 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "method": "bdev_iscsi_set_options", 00:26:38.615 "params": { 00:26:38.615 "timeout_sec": 30 00:26:38.615 } 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "method": "bdev_nvme_set_options", 00:26:38.615 "params": { 00:26:38.615 "action_on_timeout": "none", 00:26:38.615 "timeout_us": 0, 00:26:38.615 "timeout_admin_us": 0, 00:26:38.615 "keep_alive_timeout_ms": 10000, 00:26:38.615 "arbitration_burst": 0, 00:26:38.615 "low_priority_weight": 0, 00:26:38.615 "medium_priority_weight": 0, 00:26:38.615 "high_priority_weight": 0, 00:26:38.615 "nvme_adminq_poll_period_us": 10000, 00:26:38.615 "nvme_ioq_poll_period_us": 0, 00:26:38.615 "io_queue_requests": 0, 00:26:38.615 "delay_cmd_submit": true, 00:26:38.615 "transport_retry_count": 4, 00:26:38.615 "bdev_retry_count": 3, 00:26:38.615 "transport_ack_timeout": 0, 00:26:38.615 "ctrlr_loss_timeout_sec": 0, 00:26:38.615 "reconnect_delay_sec": 0, 00:26:38.615 "fast_io_fail_timeout_sec": 0, 00:26:38.615 "disable_auto_failback": false, 00:26:38.615 "generate_uuids": false, 00:26:38.615 "transport_tos": 0, 00:26:38.615 "nvme_error_stat": false, 00:26:38.615 "rdma_srq_size": 0, 00:26:38.615 "io_path_stat": false, 
00:26:38.615 "allow_accel_sequence": false, 00:26:38.615 "rdma_max_cq_size": 0, 00:26:38.615 "rdma_cm_event_timeout_ms": 0, 00:26:38.615 "dhchap_digests": [ 00:26:38.615 "sha256", 00:26:38.615 "sha384", 00:26:38.615 "sha512" 00:26:38.615 ], 00:26:38.615 "dhchap_dhgroups": [ 00:26:38.615 "null", 00:26:38.615 "ffdhe2048", 00:26:38.615 "ffdhe3072", 00:26:38.615 "ffdhe4096", 00:26:38.615 "ffdhe6144", 00:26:38.615 "ffdhe8192" 00:26:38.615 ] 00:26:38.615 } 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "method": "bdev_nvme_set_hotplug", 00:26:38.615 "params": { 00:26:38.615 "period_us": 100000, 00:26:38.615 "enable": false 00:26:38.615 } 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "method": "bdev_wait_for_examine" 00:26:38.615 } 00:26:38.615 ] 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "scsi", 00:26:38.615 "config": null 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "scheduler", 00:26:38.615 "config": [ 00:26:38.615 { 00:26:38.615 "method": "framework_set_scheduler", 00:26:38.615 "params": { 00:26:38.615 "name": "static" 00:26:38.615 } 00:26:38.615 } 00:26:38.615 ] 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "vhost_scsi", 00:26:38.615 "config": [] 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "vhost_blk", 00:26:38.615 "config": [] 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "ublk", 00:26:38.615 "config": [] 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "nbd", 00:26:38.615 "config": [] 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "subsystem": "nvmf", 00:26:38.615 "config": [ 00:26:38.615 { 00:26:38.615 "method": "nvmf_set_config", 00:26:38.615 "params": { 00:26:38.615 "discovery_filter": "match_any", 00:26:38.615 "admin_cmd_passthru": { 00:26:38.615 "identify_ctrlr": false 00:26:38.615 }, 00:26:38.615 "dhchap_digests": [ 00:26:38.615 "sha256", 00:26:38.615 "sha384", 00:26:38.615 "sha512" 00:26:38.615 ], 00:26:38.615 "dhchap_dhgroups": [ 00:26:38.615 "null", 00:26:38.615 "ffdhe2048", 00:26:38.615 "ffdhe3072", 00:26:38.615 "ffdhe4096", 00:26:38.615 "ffdhe6144", 00:26:38.615 "ffdhe8192" 00:26:38.615 ] 00:26:38.615 } 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "method": "nvmf_set_max_subsystems", 00:26:38.615 "params": { 00:26:38.615 "max_subsystems": 1024 00:26:38.615 } 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "method": "nvmf_set_crdt", 00:26:38.615 "params": { 00:26:38.615 "crdt1": 0, 00:26:38.615 "crdt2": 0, 00:26:38.615 "crdt3": 0 00:26:38.615 } 00:26:38.615 }, 00:26:38.615 { 00:26:38.615 "method": "nvmf_create_transport", 00:26:38.615 "params": { 00:26:38.615 "trtype": "TCP", 00:26:38.615 "max_queue_depth": 128, 00:26:38.615 "max_io_qpairs_per_ctrlr": 127, 00:26:38.615 "in_capsule_data_size": 4096, 00:26:38.615 "max_io_size": 131072, 00:26:38.615 "io_unit_size": 131072, 00:26:38.615 "max_aq_depth": 128, 00:26:38.615 "num_shared_buffers": 511, 00:26:38.615 "buf_cache_size": 4294967295, 00:26:38.615 "dif_insert_or_strip": false, 00:26:38.615 "zcopy": false, 00:26:38.615 "c2h_success": true, 00:26:38.615 "sock_priority": 0, 00:26:38.615 "abort_timeout_sec": 1, 00:26:38.615 "ack_timeout": 0, 00:26:38.616 "data_wr_pool_size": 0 00:26:38.616 } 00:26:38.616 } 00:26:38.616 ] 00:26:38.616 }, 00:26:38.616 { 00:26:38.616 "subsystem": "iscsi", 00:26:38.616 "config": [ 00:26:38.616 { 00:26:38.616 "method": "iscsi_set_options", 00:26:38.616 "params": { 00:26:38.616 "node_base": "iqn.2016-06.io.spdk", 00:26:38.616 "max_sessions": 128, 00:26:38.616 "max_connections_per_session": 2, 00:26:38.616 "max_queue_depth": 64, 00:26:38.616 
"default_time2wait": 2, 00:26:38.616 "default_time2retain": 20, 00:26:38.616 "first_burst_length": 8192, 00:26:38.616 "immediate_data": true, 00:26:38.616 "allow_duplicated_isid": false, 00:26:38.616 "error_recovery_level": 0, 00:26:38.616 "nop_timeout": 60, 00:26:38.616 "nop_in_interval": 30, 00:26:38.616 "disable_chap": false, 00:26:38.616 "require_chap": false, 00:26:38.616 "mutual_chap": false, 00:26:38.616 "chap_group": 0, 00:26:38.616 "max_large_datain_per_connection": 64, 00:26:38.616 "max_r2t_per_connection": 4, 00:26:38.616 "pdu_pool_size": 36864, 00:26:38.616 "immediate_data_pool_size": 16384, 00:26:38.616 "data_out_pool_size": 2048 00:26:38.616 } 00:26:38.616 } 00:26:38.616 ] 00:26:38.616 } 00:26:38.616 ] 00:26:38.616 } 00:26:38.616 13:21:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:26:38.616 13:21:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58505 00:26:38.616 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58505 ']' 00:26:38.616 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58505 00:26:38.616 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:26:38.616 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.616 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58505 00:26:38.616 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:38.616 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:38.616 killing process with pid 58505 00:26:38.616 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58505' 00:26:38.616 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58505 00:26:38.616 13:21:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58505 00:26:41.902 13:21:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58572 00:26:41.902 13:21:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:41.902 13:21:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:26:47.174 13:21:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58572 00:26:47.174 13:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58572 ']' 00:26:47.174 13:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58572 00:26:47.174 13:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:26:47.174 13:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:47.174 13:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58572 00:26:47.174 killing process with pid 58572 00:26:47.174 13:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:47.174 13:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:47.174 13:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58572' 00:26:47.174 13:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58572 00:26:47.174 13:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58572 00:26:49.722 13:21:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:49.722 13:21:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:49.722 ************************************ 00:26:49.722 END TEST skip_rpc_with_json 00:26:49.722 ************************************ 00:26:49.722 00:26:49.722 real 0m13.153s 00:26:49.722 user 0m12.154s 00:26:49.722 sys 0m1.435s 00:26:49.722 13:21:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.722 13:21:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:49.980 13:21:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:26:49.980 13:21:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:49.980 13:21:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.980 13:21:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:49.980 ************************************ 00:26:49.980 START TEST skip_rpc_with_delay 00:26:49.980 ************************************ 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:26:49.980 13:21:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:49.980 [2024-12-06 13:21:43.015782] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
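The two tests above round-trip a JSON configuration: skip_rpc_with_json saves the live configuration with save_config, restarts spdk_tgt with --json and --no-rpc-server so the nvmf TCP transport is recreated without any RPC traffic, and greps the log for the 'TCP Transport Init' notice; skip_rpc_with_delay then confirms that --wait-for-rpc is rejected once the RPC server is disabled. A minimal sketch of the same round trip (the /tmp path is illustrative):

    # capture the running target's configuration over RPC
    scripts/rpc.py save_config > /tmp/config.json
    # relaunch from the saved file; subsystems are initialized from JSON,
    # so no RPC server is required at startup
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json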
00:26:50.239 ************************************ 00:26:50.239 END TEST skip_rpc_with_delay 00:26:50.239 ************************************ 00:26:50.239 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:26:50.239 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:50.239 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:50.239 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:50.239 00:26:50.239 real 0m0.272s 00:26:50.239 user 0m0.142s 00:26:50.239 sys 0m0.126s 00:26:50.239 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:50.239 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:26:50.239 13:21:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:26:50.239 13:21:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:26:50.239 13:21:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:26:50.239 13:21:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:50.239 13:21:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:50.239 13:21:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:50.239 ************************************ 00:26:50.239 START TEST exit_on_failed_rpc_init 00:26:50.239 ************************************ 00:26:50.239 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:26:50.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.239 13:21:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58711 00:26:50.239 13:21:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58711 00:26:50.239 13:21:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:50.239 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58711 ']' 00:26:50.239 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.239 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.239 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.239 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.239 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:26:50.497 [2024-12-06 13:21:43.359726] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:26:50.497 [2024-12-06 13:21:43.359923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58711 ] 00:26:50.497 [2024-12-06 13:21:43.575561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.756 [2024-12-06 13:21:43.777885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:26:52.136 13:21:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:52.136 [2024-12-06 13:21:45.147897] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:26:52.136 [2024-12-06 13:21:45.148416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58740 ] 00:26:52.393 [2024-12-06 13:21:45.362195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.652 [2024-12-06 13:21:45.563559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.652 [2024-12-06 13:21:45.563698] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:52.652 [2024-12-06 13:21:45.563719] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:52.652 [2024-12-06 13:21:45.563761] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58711 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58711 ']' 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58711 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58711 00:26:52.928 killing process with pid 58711 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58711' 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58711 00:26:52.928 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58711 00:26:56.211 00:26:56.211 real 0m5.844s 00:26:56.211 user 0m6.178s 00:26:56.211 sys 0m1.034s 00:26:56.211 ************************************ 00:26:56.211 END TEST exit_on_failed_rpc_init 00:26:56.211 ************************************ 00:26:56.211 13:21:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:56.211 13:21:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:26:56.211 13:21:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:56.211 00:26:56.211 real 0m27.896s 00:26:56.211 user 0m26.052s 00:26:56.211 sys 0m3.536s 00:26:56.211 13:21:49 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:56.211 13:21:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:56.211 ************************************ 00:26:56.211 END TEST skip_rpc 00:26:56.211 ************************************ 00:26:56.211 13:21:49 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:26:56.211 13:21:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:56.211 13:21:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:56.211 13:21:49 -- common/autotest_common.sh@10 -- # set +x 00:26:56.211 
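exit_on_failed_rpc_init above starts a second spdk_tgt while the first still owns the default RPC socket and expects the 'RPC Unix domain socket path /var/tmp/spdk.sock in use' failure followed by a clean spdk_app_stop. Outside this negative test, two targets can coexist by giving each instance its own socket with -r; a minimal sketch under that assumption (socket paths are illustrative):

    # first target on the default socket, /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x1 &
    # second target on its own RPC socket, avoiding the collision
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
    # address a specific instance by pointing rpc.py at its socket
    scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version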
************************************ 00:26:56.211 START TEST rpc_client 00:26:56.211 ************************************ 00:26:56.211 13:21:49 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:26:56.211 * Looking for test storage... 00:26:56.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:26:56.211 13:21:49 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:56.211 13:21:49 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:56.211 13:21:49 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:26:56.471 13:21:49 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@345 -- # : 1 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@353 -- # local d=1 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@355 -- # echo 1 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@353 -- # local d=2 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@355 -- # echo 2 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.471 13:21:49 rpc_client -- scripts/common.sh@368 -- # return 0 00:26:56.471 13:21:49 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.471 13:21:49 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:56.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.471 --rc genhtml_branch_coverage=1 00:26:56.471 --rc genhtml_function_coverage=1 00:26:56.471 --rc genhtml_legend=1 00:26:56.471 --rc geninfo_all_blocks=1 00:26:56.471 --rc geninfo_unexecuted_blocks=1 00:26:56.471 00:26:56.471 ' 00:26:56.471 13:21:49 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:56.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.471 --rc genhtml_branch_coverage=1 00:26:56.471 --rc genhtml_function_coverage=1 00:26:56.471 --rc genhtml_legend=1 00:26:56.471 --rc geninfo_all_blocks=1 00:26:56.471 --rc geninfo_unexecuted_blocks=1 00:26:56.471 00:26:56.471 ' 00:26:56.471 13:21:49 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:56.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.471 --rc genhtml_branch_coverage=1 00:26:56.471 --rc genhtml_function_coverage=1 00:26:56.471 --rc genhtml_legend=1 00:26:56.471 --rc geninfo_all_blocks=1 00:26:56.471 --rc geninfo_unexecuted_blocks=1 00:26:56.471 00:26:56.471 ' 00:26:56.471 13:21:49 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:56.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.471 --rc genhtml_branch_coverage=1 00:26:56.471 --rc genhtml_function_coverage=1 00:26:56.471 --rc genhtml_legend=1 00:26:56.471 --rc geninfo_all_blocks=1 00:26:56.471 --rc geninfo_unexecuted_blocks=1 00:26:56.471 00:26:56.471 ' 00:26:56.471 13:21:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:26:56.471 OK 00:26:56.471 13:21:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:26:56.471 00:26:56.471 real 0m0.318s 00:26:56.471 user 0m0.163s 00:26:56.471 sys 0m0.163s 00:26:56.471 ************************************ 00:26:56.471 END TEST rpc_client 00:26:56.471 ************************************ 00:26:56.471 13:21:49 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:56.471 13:21:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:26:56.471 13:21:49 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:26:56.471 13:21:49 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:56.471 13:21:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:56.471 13:21:49 -- common/autotest_common.sh@10 -- # set +x 00:26:56.471 ************************************ 00:26:56.471 START TEST json_config 00:26:56.471 ************************************ 00:26:56.471 13:21:49 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:26:56.731 13:21:49 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:56.731 13:21:49 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:56.731 13:21:49 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:26:56.731 13:21:49 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:56.732 13:21:49 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.732 13:21:49 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.732 13:21:49 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.732 13:21:49 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.732 13:21:49 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.732 13:21:49 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.732 13:21:49 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.732 13:21:49 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.732 13:21:49 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.732 13:21:49 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.732 13:21:49 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.732 13:21:49 json_config -- scripts/common.sh@344 -- # case "$op" in 00:26:56.732 13:21:49 json_config -- scripts/common.sh@345 -- # : 1 00:26:56.732 13:21:49 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.732 13:21:49 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.732 13:21:49 json_config -- scripts/common.sh@365 -- # decimal 1 00:26:56.732 13:21:49 json_config -- scripts/common.sh@353 -- # local d=1 00:26:56.732 13:21:49 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.732 13:21:49 json_config -- scripts/common.sh@355 -- # echo 1 00:26:56.732 13:21:49 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.732 13:21:49 json_config -- scripts/common.sh@366 -- # decimal 2 00:26:56.732 13:21:49 json_config -- scripts/common.sh@353 -- # local d=2 00:26:56.732 13:21:49 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.732 13:21:49 json_config -- scripts/common.sh@355 -- # echo 2 00:26:56.732 13:21:49 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.732 13:21:49 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.732 13:21:49 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.732 13:21:49 json_config -- scripts/common.sh@368 -- # return 0 00:26:56.732 13:21:49 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.732 13:21:49 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:56.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.732 --rc genhtml_branch_coverage=1 00:26:56.732 --rc genhtml_function_coverage=1 00:26:56.732 --rc genhtml_legend=1 00:26:56.732 --rc geninfo_all_blocks=1 00:26:56.732 --rc geninfo_unexecuted_blocks=1 00:26:56.732 00:26:56.732 ' 00:26:56.732 13:21:49 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:56.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.732 --rc genhtml_branch_coverage=1 00:26:56.732 --rc genhtml_function_coverage=1 00:26:56.732 --rc genhtml_legend=1 00:26:56.732 --rc geninfo_all_blocks=1 00:26:56.732 --rc geninfo_unexecuted_blocks=1 00:26:56.732 00:26:56.732 ' 00:26:56.732 13:21:49 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:56.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.732 --rc genhtml_branch_coverage=1 00:26:56.732 --rc genhtml_function_coverage=1 00:26:56.732 --rc genhtml_legend=1 00:26:56.732 --rc geninfo_all_blocks=1 00:26:56.732 --rc geninfo_unexecuted_blocks=1 00:26:56.732 00:26:56.732 ' 00:26:56.732 13:21:49 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:56.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.732 --rc genhtml_branch_coverage=1 00:26:56.732 --rc genhtml_function_coverage=1 00:26:56.732 --rc genhtml_legend=1 00:26:56.732 --rc geninfo_all_blocks=1 00:26:56.732 --rc geninfo_unexecuted_blocks=1 00:26:56.732 00:26:56.732 ' 00:26:56.732 13:21:49 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.732 13:21:49 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c818d8d4-9664-40ed-b0e6-117acd044092 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=c818d8d4-9664-40ed-b0e6-117acd044092 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:56.732 13:21:49 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.732 13:21:49 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.732 13:21:49 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.732 13:21:49 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.732 13:21:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.732 13:21:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.732 13:21:49 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.732 13:21:49 json_config -- paths/export.sh@5 -- # export PATH 00:26:56.732 13:21:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@51 -- # : 0 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.732 13:21:49 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:56.732 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.732 13:21:49 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.732 13:21:49 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:26:56.732 13:21:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:26:56.732 13:21:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:26:56.732 13:21:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:26:56.732 13:21:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:26:56.732 13:21:49 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:26:56.732 WARNING: No tests are enabled so not running JSON configuration tests 00:26:56.732 13:21:49 json_config -- json_config/json_config.sh@28 -- # exit 0 00:26:56.732 00:26:56.732 real 0m0.215s 00:26:56.732 user 0m0.134s 00:26:56.732 sys 0m0.084s 00:26:56.732 ************************************ 00:26:56.732 END TEST json_config 00:26:56.732 ************************************ 00:26:56.732 13:21:49 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:56.732 13:21:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:56.732 13:21:49 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:26:56.732 13:21:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:56.732 13:21:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:56.732 13:21:49 -- common/autotest_common.sh@10 -- # set +x 00:26:56.732 ************************************ 00:26:56.732 START TEST json_config_extra_key 00:26:56.732 ************************************ 00:26:56.732 13:21:49 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:26:56.992 13:21:49 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:56.992 13:21:49 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:26:56.992 13:21:49 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:56.992 13:21:49 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.992 13:21:49 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.992 13:21:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:26:56.992 13:21:49 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.992 13:21:49 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:56.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.992 --rc genhtml_branch_coverage=1 00:26:56.992 --rc genhtml_function_coverage=1 00:26:56.992 --rc genhtml_legend=1 00:26:56.992 --rc geninfo_all_blocks=1 00:26:56.992 --rc geninfo_unexecuted_blocks=1 00:26:56.992 00:26:56.992 ' 00:26:56.992 13:21:49 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:56.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.992 --rc genhtml_branch_coverage=1 00:26:56.992 --rc genhtml_function_coverage=1 00:26:56.992 --rc genhtml_legend=1 00:26:56.992 --rc geninfo_all_blocks=1 00:26:56.992 --rc geninfo_unexecuted_blocks=1 00:26:56.992 00:26:56.992 ' 00:26:56.992 13:21:49 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:56.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.992 --rc genhtml_branch_coverage=1 00:26:56.992 --rc genhtml_function_coverage=1 00:26:56.992 --rc genhtml_legend=1 00:26:56.992 --rc geninfo_all_blocks=1 00:26:56.992 --rc geninfo_unexecuted_blocks=1 00:26:56.992 00:26:56.992 ' 00:26:56.992 13:21:49 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:56.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.992 --rc genhtml_branch_coverage=1 00:26:56.992 --rc 
genhtml_function_coverage=1 00:26:56.992 --rc genhtml_legend=1 00:26:56.992 --rc geninfo_all_blocks=1 00:26:56.992 --rc geninfo_unexecuted_blocks=1 00:26:56.992 00:26:56.992 ' 00:26:56.992 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:56.992 13:21:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:26:56.992 13:21:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.992 13:21:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.992 13:21:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.992 13:21:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.992 13:21:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.992 13:21:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.992 13:21:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.992 13:21:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.992 13:21:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.992 13:21:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.992 13:21:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c818d8d4-9664-40ed-b0e6-117acd044092 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c818d8d4-9664-40ed-b0e6-117acd044092 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:56.993 13:21:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.993 13:21:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.993 13:21:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.993 13:21:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.993 13:21:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.993 13:21:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.993 13:21:50 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.993 13:21:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:26:56.993 13:21:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:56.993 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.993 13:21:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.993 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:26:56.993 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:26:56.993 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:26:56.993 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:26:56.993 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:26:56.993 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:26:56.993 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:26:56.993 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:26:56.993 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:26:56.993 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:26:56.993 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:26:56.993 INFO: launching applications... 
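The diagnostic captured twice above, "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected", is a real bash message rather than a test failure: the traced test is '[' '' -eq 1 ']', and the -eq operator requires integer operands, so an empty expansion makes the test print the error and evaluate false. A minimal sketch of the usual guard follows; FLAG is a hypothetical stand-in, since the log does not show which variable expands empty at that line.

  # Hypothetical repro and guard for the '[' '' -eq 1 ']' pattern at nvmf/common.sh:33.
  # FLAG is a placeholder name; the real variable is not visible in this trace.
  FLAG=""
  if [ "${FLAG:-0}" -eq 1 ]; then   # ${FLAG:-0} substitutes 0 when FLAG is empty or unset
      :                             # branch taken only when FLAG is the integer 1
  fi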
00:26:56.993 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:26:56.993 13:21:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:26:56.993 13:21:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:26:56.993 13:21:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:26:56.993 13:21:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:26:56.993 13:21:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:26:56.993 13:21:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:56.993 13:21:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:56.993 13:21:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58961 00:26:56.993 13:21:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:26:56.993 Waiting for target to run... 00:26:56.993 13:21:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58961 /var/tmp/spdk_tgt.sock 00:26:56.993 13:21:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:26:56.993 13:21:50 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58961 ']' 00:26:56.993 13:21:50 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:26:56.993 13:21:50 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:26:56.993 13:21:50 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:26:56.993 13:21:50 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.993 13:21:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:26:57.251 [2024-12-06 13:21:50.194867] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:26:57.251 [2024-12-06 13:21:50.195377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58961 ] 00:26:57.817 [2024-12-06 13:21:50.821031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.075 [2024-12-06 13:21:50.955598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.024 00:26:59.024 INFO: shutting down applications... 00:26:59.024 13:21:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.024 13:21:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:26:59.024 13:21:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:26:59.024 13:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
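The trace above shows the start-and-wait pattern: json_config_test_start_app launches spdk_tgt with the extra_key.json config, records pid 58961, and waitforlisten polls (max_retries=100) until the target answers on /var/tmp/spdk_tgt.sock. A minimal sketch under the assumption that the probe is an RPC round-trip; the exact mechanism waitforlisten uses is not visible in this trace, though spdk_get_version is a real RPC (it appears in the method list later in this log).

  # Sketch of the launch-and-wait flow traced above (flags and paths from the log).
  sock=/var/tmp/spdk_tgt.sock
  build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" \
      --json test/json_config/extra_key.json &
  app_pid=$!
  for (( i = 0; i < 100; i++ )); do
      # assumed probe: succeeds once the target listens on the RPC socket
      scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null && break
      sleep 0.5
  done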
00:26:59.024 13:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:26:59.024 13:21:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:26:59.024 13:21:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:26:59.024 13:21:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58961 ]] 00:26:59.024 13:21:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58961 00:26:59.024 13:21:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:26:59.024 13:21:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:59.024 13:21:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58961 00:26:59.024 13:21:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:59.282 13:21:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:59.282 13:21:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:59.282 13:21:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58961 00:26:59.282 13:21:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:59.849 13:21:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:59.849 13:21:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:59.849 13:21:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58961 00:26:59.849 13:21:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:27:00.479 13:21:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:27:00.479 13:21:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:00.479 13:21:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58961 00:27:00.479 13:21:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:27:01.044 13:21:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:27:01.044 13:21:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:01.044 13:21:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58961 00:27:01.044 13:21:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:27:01.302 13:21:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:27:01.302 13:21:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:01.302 13:21:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58961 00:27:01.302 13:21:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:27:01.867 13:21:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:27:01.867 13:21:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:01.867 13:21:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58961 00:27:01.867 13:21:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:27:02.433 13:21:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:27:02.433 13:21:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:02.433 13:21:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58961 00:27:02.433 13:21:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:27:02.433 13:21:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:27:02.433 SPDK target shutdown done 00:27:02.433 Success 00:27:02.433 13:21:55 json_config_extra_key -- 
json_config/common.sh@48 -- # [[ -n '' ]] 00:27:02.433 13:21:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:27:02.433 13:21:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:27:02.433 00:27:02.433 real 0m5.601s 00:27:02.433 user 0m5.164s 00:27:02.433 sys 0m0.932s 00:27:02.433 ************************************ 00:27:02.433 END TEST json_config_extra_key 00:27:02.433 ************************************ 00:27:02.433 13:21:55 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.433 13:21:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:27:02.433 13:21:55 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:27:02.433 13:21:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:02.433 13:21:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.433 13:21:55 -- common/autotest_common.sh@10 -- # set +x 00:27:02.433 ************************************ 00:27:02.433 START TEST alias_rpc 00:27:02.433 ************************************ 00:27:02.433 13:21:55 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:27:02.691 * Looking for test storage... 00:27:02.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:27:02.691 13:21:55 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:02.691 13:21:55 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:02.691 13:21:55 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:27:02.691 13:21:55 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:27:02.691 13:21:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:27:02.692 13:21:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.692 13:21:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:27:02.692 13:21:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:02.692 13:21:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:27:02.692 13:21:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:27:02.692 13:21:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.692 13:21:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:27:02.692 13:21:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:02.692 13:21:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:02.692 13:21:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:02.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.692 13:21:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:27:02.692 13:21:55 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.692 13:21:55 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:02.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.692 --rc genhtml_branch_coverage=1 00:27:02.692 --rc genhtml_function_coverage=1 00:27:02.692 --rc genhtml_legend=1 00:27:02.692 --rc geninfo_all_blocks=1 00:27:02.692 --rc geninfo_unexecuted_blocks=1 00:27:02.692 00:27:02.692 ' 00:27:02.692 13:21:55 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:02.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.692 --rc genhtml_branch_coverage=1 00:27:02.692 --rc genhtml_function_coverage=1 00:27:02.692 --rc genhtml_legend=1 00:27:02.692 --rc geninfo_all_blocks=1 00:27:02.692 --rc geninfo_unexecuted_blocks=1 00:27:02.692 00:27:02.692 ' 00:27:02.692 13:21:55 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:02.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.692 --rc genhtml_branch_coverage=1 00:27:02.692 --rc genhtml_function_coverage=1 00:27:02.692 --rc genhtml_legend=1 00:27:02.692 --rc geninfo_all_blocks=1 00:27:02.692 --rc geninfo_unexecuted_blocks=1 00:27:02.692 00:27:02.692 ' 00:27:02.692 13:21:55 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:02.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.692 --rc genhtml_branch_coverage=1 00:27:02.692 --rc genhtml_function_coverage=1 00:27:02.692 --rc genhtml_legend=1 00:27:02.692 --rc geninfo_all_blocks=1 00:27:02.692 --rc geninfo_unexecuted_blocks=1 00:27:02.692 00:27:02.692 ' 00:27:02.692 13:21:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:27:02.692 13:21:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59079 00:27:02.692 13:21:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:02.692 13:21:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59079 00:27:02.692 13:21:55 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59079 ']' 00:27:02.692 13:21:55 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.692 13:21:55 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.692 13:21:55 alias_rpc -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.692 13:21:55 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.692 13:21:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:02.950 [2024-12-06 13:21:55.810745] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:27:02.950 [2024-12-06 13:21:55.811273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59079 ] 00:27:02.950 [2024-12-06 13:21:56.016833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.210 [2024-12-06 13:21:56.178186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.644 13:21:57 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.644 13:21:57 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:27:04.644 13:21:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:27:04.644 13:21:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59079 00:27:04.644 13:21:57 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59079 ']' 00:27:04.644 13:21:57 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59079 00:27:04.644 13:21:57 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:27:04.645 13:21:57 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.645 13:21:57 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59079 00:27:04.645 killing process with pid 59079 00:27:04.645 13:21:57 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:04.645 13:21:57 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:04.645 13:21:57 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59079' 00:27:04.645 13:21:57 alias_rpc -- common/autotest_common.sh@973 -- # kill 59079 00:27:04.645 13:21:57 alias_rpc -- common/autotest_common.sh@978 -- # wait 59079 00:27:07.931 ************************************ 00:27:07.931 END TEST alias_rpc 00:27:07.931 ************************************ 00:27:07.931 00:27:07.931 real 0m5.386s 00:27:07.931 user 0m5.320s 00:27:07.931 sys 0m0.898s 00:27:07.931 13:22:00 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:07.931 13:22:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:07.931 13:22:00 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:27:07.931 13:22:00 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:27:07.931 13:22:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:07.931 13:22:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:07.931 13:22:00 -- common/autotest_common.sh@10 -- # set +x 00:27:07.931 ************************************ 00:27:07.931 START TEST spdkcli_tcp 00:27:07.931 ************************************ 00:27:07.931 13:22:00 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:27:07.931 * Looking for test storage... 
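The killprocess trace above (pid 59079) follows a fixed sequence: verify the pid argument, probe with kill -0, read the command name via ps on Linux, skip sudo-wrapped processes, then kill and wait. A condensed sketch of that helper, simplified from the xtrace; the real helper's handling of sudo-wrapped targets is omitted here because it is not shown in this trace.

  # Sketch of the killprocess helper as traced above (simplified).
  killprocess() {
      local pid=$1 process_name=""
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 0                         # already gone: nothing to do
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1             # sudo wrappers handled elsewhere
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                        # reap and propagate exit status
  }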
00:27:07.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:27:07.931 13:22:00 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:07.931 13:22:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:27:07.931 13:22:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.190 13:22:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:08.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.190 --rc genhtml_branch_coverage=1 00:27:08.190 --rc genhtml_function_coverage=1 00:27:08.190 --rc genhtml_legend=1 00:27:08.190 --rc geninfo_all_blocks=1 00:27:08.190 --rc geninfo_unexecuted_blocks=1 00:27:08.190 00:27:08.190 ' 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:08.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.190 --rc genhtml_branch_coverage=1 00:27:08.190 --rc genhtml_function_coverage=1 00:27:08.190 --rc genhtml_legend=1 00:27:08.190 --rc geninfo_all_blocks=1 00:27:08.190 --rc geninfo_unexecuted_blocks=1 00:27:08.190 
00:27:08.190 ' 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:08.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.190 --rc genhtml_branch_coverage=1 00:27:08.190 --rc genhtml_function_coverage=1 00:27:08.190 --rc genhtml_legend=1 00:27:08.190 --rc geninfo_all_blocks=1 00:27:08.190 --rc geninfo_unexecuted_blocks=1 00:27:08.190 00:27:08.190 ' 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:08.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.190 --rc genhtml_branch_coverage=1 00:27:08.190 --rc genhtml_function_coverage=1 00:27:08.190 --rc genhtml_legend=1 00:27:08.190 --rc geninfo_all_blocks=1 00:27:08.190 --rc geninfo_unexecuted_blocks=1 00:27:08.190 00:27:08.190 ' 00:27:08.190 13:22:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:27:08.190 13:22:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:27:08.190 13:22:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:27:08.190 13:22:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:27:08.190 13:22:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:27:08.190 13:22:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:08.190 13:22:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:08.190 13:22:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59203 00:27:08.190 13:22:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:27:08.190 13:22:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59203 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59203 ']' 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:08.190 13:22:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:08.190 [2024-12-06 13:22:01.274343] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
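The spdkcli_tcp trace below exercises RPC over TCP by bridging the target's UNIX socket to a TCP port with socat, then pointing rpc.py at 127.0.0.1:9998. The commands here are taken verbatim from the trace that follows; only the backgrounding and cleanup lines are added for a self-contained sketch.

  # Sketch of the TCP bridge set up in the trace below (port and paths from the log).
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # expose the UNIX RPC socket on TCP 9998
  socat_pid=$!
  # rpc.py then reaches the target over TCP instead of the UNIX socket:
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"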
00:27:08.190 [2024-12-06 13:22:01.274799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59203 ] 00:27:08.448 [2024-12-06 13:22:01.466687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:08.706 [2024-12-06 13:22:01.640600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.706 [2024-12-06 13:22:01.640624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.082 13:22:02 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.082 13:22:02 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:27:10.082 13:22:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59228 00:27:10.082 13:22:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:27:10.082 13:22:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:27:10.082 [ 00:27:10.082 "bdev_malloc_delete", 00:27:10.082 "bdev_malloc_create", 00:27:10.082 "bdev_null_resize", 00:27:10.082 "bdev_null_delete", 00:27:10.082 "bdev_null_create", 00:27:10.082 "bdev_nvme_cuse_unregister", 00:27:10.082 "bdev_nvme_cuse_register", 00:27:10.082 "bdev_opal_new_user", 00:27:10.082 "bdev_opal_set_lock_state", 00:27:10.082 "bdev_opal_delete", 00:27:10.082 "bdev_opal_get_info", 00:27:10.082 "bdev_opal_create", 00:27:10.082 "bdev_nvme_opal_revert", 00:27:10.082 "bdev_nvme_opal_init", 00:27:10.082 "bdev_nvme_send_cmd", 00:27:10.082 "bdev_nvme_set_keys", 00:27:10.082 "bdev_nvme_get_path_iostat", 00:27:10.082 "bdev_nvme_get_mdns_discovery_info", 00:27:10.082 "bdev_nvme_stop_mdns_discovery", 00:27:10.082 "bdev_nvme_start_mdns_discovery", 00:27:10.082 "bdev_nvme_set_multipath_policy", 00:27:10.082 "bdev_nvme_set_preferred_path", 00:27:10.082 "bdev_nvme_get_io_paths", 00:27:10.083 "bdev_nvme_remove_error_injection", 00:27:10.083 "bdev_nvme_add_error_injection", 00:27:10.083 "bdev_nvme_get_discovery_info", 00:27:10.083 "bdev_nvme_stop_discovery", 00:27:10.083 "bdev_nvme_start_discovery", 00:27:10.083 "bdev_nvme_get_controller_health_info", 00:27:10.083 "bdev_nvme_disable_controller", 00:27:10.083 "bdev_nvme_enable_controller", 00:27:10.083 "bdev_nvme_reset_controller", 00:27:10.083 "bdev_nvme_get_transport_statistics", 00:27:10.083 "bdev_nvme_apply_firmware", 00:27:10.083 "bdev_nvme_detach_controller", 00:27:10.083 "bdev_nvme_get_controllers", 00:27:10.083 "bdev_nvme_attach_controller", 00:27:10.083 "bdev_nvme_set_hotplug", 00:27:10.083 "bdev_nvme_set_options", 00:27:10.083 "bdev_passthru_delete", 00:27:10.083 "bdev_passthru_create", 00:27:10.083 "bdev_lvol_set_parent_bdev", 00:27:10.083 "bdev_lvol_set_parent", 00:27:10.083 "bdev_lvol_check_shallow_copy", 00:27:10.083 "bdev_lvol_start_shallow_copy", 00:27:10.083 "bdev_lvol_grow_lvstore", 00:27:10.083 "bdev_lvol_get_lvols", 00:27:10.083 "bdev_lvol_get_lvstores", 00:27:10.083 "bdev_lvol_delete", 00:27:10.083 "bdev_lvol_set_read_only", 00:27:10.083 "bdev_lvol_resize", 00:27:10.083 "bdev_lvol_decouple_parent", 00:27:10.083 "bdev_lvol_inflate", 00:27:10.083 "bdev_lvol_rename", 00:27:10.083 "bdev_lvol_clone_bdev", 00:27:10.083 "bdev_lvol_clone", 00:27:10.083 "bdev_lvol_snapshot", 00:27:10.083 "bdev_lvol_create", 00:27:10.083 "bdev_lvol_delete_lvstore", 00:27:10.083 "bdev_lvol_rename_lvstore", 00:27:10.083 
"bdev_lvol_create_lvstore", 00:27:10.083 "bdev_raid_set_options", 00:27:10.083 "bdev_raid_remove_base_bdev", 00:27:10.083 "bdev_raid_add_base_bdev", 00:27:10.083 "bdev_raid_delete", 00:27:10.083 "bdev_raid_create", 00:27:10.083 "bdev_raid_get_bdevs", 00:27:10.083 "bdev_error_inject_error", 00:27:10.083 "bdev_error_delete", 00:27:10.083 "bdev_error_create", 00:27:10.083 "bdev_split_delete", 00:27:10.083 "bdev_split_create", 00:27:10.083 "bdev_delay_delete", 00:27:10.083 "bdev_delay_create", 00:27:10.083 "bdev_delay_update_latency", 00:27:10.083 "bdev_zone_block_delete", 00:27:10.083 "bdev_zone_block_create", 00:27:10.083 "blobfs_create", 00:27:10.083 "blobfs_detect", 00:27:10.083 "blobfs_set_cache_size", 00:27:10.083 "bdev_xnvme_delete", 00:27:10.083 "bdev_xnvme_create", 00:27:10.083 "bdev_aio_delete", 00:27:10.083 "bdev_aio_rescan", 00:27:10.083 "bdev_aio_create", 00:27:10.083 "bdev_ftl_set_property", 00:27:10.083 "bdev_ftl_get_properties", 00:27:10.083 "bdev_ftl_get_stats", 00:27:10.083 "bdev_ftl_unmap", 00:27:10.083 "bdev_ftl_unload", 00:27:10.083 "bdev_ftl_delete", 00:27:10.083 "bdev_ftl_load", 00:27:10.083 "bdev_ftl_create", 00:27:10.083 "bdev_virtio_attach_controller", 00:27:10.083 "bdev_virtio_scsi_get_devices", 00:27:10.083 "bdev_virtio_detach_controller", 00:27:10.083 "bdev_virtio_blk_set_hotplug", 00:27:10.083 "bdev_iscsi_delete", 00:27:10.083 "bdev_iscsi_create", 00:27:10.083 "bdev_iscsi_set_options", 00:27:10.083 "accel_error_inject_error", 00:27:10.083 "ioat_scan_accel_module", 00:27:10.083 "dsa_scan_accel_module", 00:27:10.083 "iaa_scan_accel_module", 00:27:10.083 "keyring_file_remove_key", 00:27:10.083 "keyring_file_add_key", 00:27:10.083 "keyring_linux_set_options", 00:27:10.083 "fsdev_aio_delete", 00:27:10.083 "fsdev_aio_create", 00:27:10.083 "iscsi_get_histogram", 00:27:10.083 "iscsi_enable_histogram", 00:27:10.083 "iscsi_set_options", 00:27:10.083 "iscsi_get_auth_groups", 00:27:10.083 "iscsi_auth_group_remove_secret", 00:27:10.083 "iscsi_auth_group_add_secret", 00:27:10.083 "iscsi_delete_auth_group", 00:27:10.083 "iscsi_create_auth_group", 00:27:10.083 "iscsi_set_discovery_auth", 00:27:10.083 "iscsi_get_options", 00:27:10.083 "iscsi_target_node_request_logout", 00:27:10.083 "iscsi_target_node_set_redirect", 00:27:10.083 "iscsi_target_node_set_auth", 00:27:10.083 "iscsi_target_node_add_lun", 00:27:10.083 "iscsi_get_stats", 00:27:10.083 "iscsi_get_connections", 00:27:10.083 "iscsi_portal_group_set_auth", 00:27:10.083 "iscsi_start_portal_group", 00:27:10.083 "iscsi_delete_portal_group", 00:27:10.083 "iscsi_create_portal_group", 00:27:10.083 "iscsi_get_portal_groups", 00:27:10.083 "iscsi_delete_target_node", 00:27:10.083 "iscsi_target_node_remove_pg_ig_maps", 00:27:10.083 "iscsi_target_node_add_pg_ig_maps", 00:27:10.083 "iscsi_create_target_node", 00:27:10.083 "iscsi_get_target_nodes", 00:27:10.083 "iscsi_delete_initiator_group", 00:27:10.083 "iscsi_initiator_group_remove_initiators", 00:27:10.083 "iscsi_initiator_group_add_initiators", 00:27:10.083 "iscsi_create_initiator_group", 00:27:10.083 "iscsi_get_initiator_groups", 00:27:10.083 "nvmf_set_crdt", 00:27:10.083 "nvmf_set_config", 00:27:10.083 "nvmf_set_max_subsystems", 00:27:10.083 "nvmf_stop_mdns_prr", 00:27:10.083 "nvmf_publish_mdns_prr", 00:27:10.083 "nvmf_subsystem_get_listeners", 00:27:10.083 "nvmf_subsystem_get_qpairs", 00:27:10.083 "nvmf_subsystem_get_controllers", 00:27:10.083 "nvmf_get_stats", 00:27:10.083 "nvmf_get_transports", 00:27:10.083 "nvmf_create_transport", 00:27:10.083 "nvmf_get_targets", 00:27:10.083 
"nvmf_delete_target", 00:27:10.083 "nvmf_create_target", 00:27:10.083 "nvmf_subsystem_allow_any_host", 00:27:10.083 "nvmf_subsystem_set_keys", 00:27:10.083 "nvmf_subsystem_remove_host", 00:27:10.083 "nvmf_subsystem_add_host", 00:27:10.083 "nvmf_ns_remove_host", 00:27:10.083 "nvmf_ns_add_host", 00:27:10.083 "nvmf_subsystem_remove_ns", 00:27:10.083 "nvmf_subsystem_set_ns_ana_group", 00:27:10.083 "nvmf_subsystem_add_ns", 00:27:10.083 "nvmf_subsystem_listener_set_ana_state", 00:27:10.083 "nvmf_discovery_get_referrals", 00:27:10.083 "nvmf_discovery_remove_referral", 00:27:10.083 "nvmf_discovery_add_referral", 00:27:10.083 "nvmf_subsystem_remove_listener", 00:27:10.083 "nvmf_subsystem_add_listener", 00:27:10.083 "nvmf_delete_subsystem", 00:27:10.083 "nvmf_create_subsystem", 00:27:10.083 "nvmf_get_subsystems", 00:27:10.083 "env_dpdk_get_mem_stats", 00:27:10.083 "nbd_get_disks", 00:27:10.083 "nbd_stop_disk", 00:27:10.083 "nbd_start_disk", 00:27:10.083 "ublk_recover_disk", 00:27:10.083 "ublk_get_disks", 00:27:10.083 "ublk_stop_disk", 00:27:10.083 "ublk_start_disk", 00:27:10.083 "ublk_destroy_target", 00:27:10.083 "ublk_create_target", 00:27:10.083 "virtio_blk_create_transport", 00:27:10.083 "virtio_blk_get_transports", 00:27:10.083 "vhost_controller_set_coalescing", 00:27:10.083 "vhost_get_controllers", 00:27:10.083 "vhost_delete_controller", 00:27:10.083 "vhost_create_blk_controller", 00:27:10.083 "vhost_scsi_controller_remove_target", 00:27:10.083 "vhost_scsi_controller_add_target", 00:27:10.083 "vhost_start_scsi_controller", 00:27:10.083 "vhost_create_scsi_controller", 00:27:10.083 "thread_set_cpumask", 00:27:10.083 "scheduler_set_options", 00:27:10.083 "framework_get_governor", 00:27:10.083 "framework_get_scheduler", 00:27:10.083 "framework_set_scheduler", 00:27:10.083 "framework_get_reactors", 00:27:10.083 "thread_get_io_channels", 00:27:10.083 "thread_get_pollers", 00:27:10.083 "thread_get_stats", 00:27:10.083 "framework_monitor_context_switch", 00:27:10.083 "spdk_kill_instance", 00:27:10.083 "log_enable_timestamps", 00:27:10.083 "log_get_flags", 00:27:10.083 "log_clear_flag", 00:27:10.083 "log_set_flag", 00:27:10.083 "log_get_level", 00:27:10.083 "log_set_level", 00:27:10.083 "log_get_print_level", 00:27:10.083 "log_set_print_level", 00:27:10.083 "framework_enable_cpumask_locks", 00:27:10.083 "framework_disable_cpumask_locks", 00:27:10.083 "framework_wait_init", 00:27:10.083 "framework_start_init", 00:27:10.083 "scsi_get_devices", 00:27:10.083 "bdev_get_histogram", 00:27:10.083 "bdev_enable_histogram", 00:27:10.083 "bdev_set_qos_limit", 00:27:10.083 "bdev_set_qd_sampling_period", 00:27:10.083 "bdev_get_bdevs", 00:27:10.083 "bdev_reset_iostat", 00:27:10.083 "bdev_get_iostat", 00:27:10.083 "bdev_examine", 00:27:10.083 "bdev_wait_for_examine", 00:27:10.083 "bdev_set_options", 00:27:10.083 "accel_get_stats", 00:27:10.083 "accel_set_options", 00:27:10.083 "accel_set_driver", 00:27:10.083 "accel_crypto_key_destroy", 00:27:10.083 "accel_crypto_keys_get", 00:27:10.083 "accel_crypto_key_create", 00:27:10.083 "accel_assign_opc", 00:27:10.083 "accel_get_module_info", 00:27:10.083 "accel_get_opc_assignments", 00:27:10.083 "vmd_rescan", 00:27:10.083 "vmd_remove_device", 00:27:10.083 "vmd_enable", 00:27:10.083 "sock_get_default_impl", 00:27:10.083 "sock_set_default_impl", 00:27:10.083 "sock_impl_set_options", 00:27:10.083 "sock_impl_get_options", 00:27:10.083 "iobuf_get_stats", 00:27:10.083 "iobuf_set_options", 00:27:10.083 "keyring_get_keys", 00:27:10.083 "framework_get_pci_devices", 00:27:10.083 
"framework_get_config", 00:27:10.083 "framework_get_subsystems", 00:27:10.083 "fsdev_set_opts", 00:27:10.083 "fsdev_get_opts", 00:27:10.083 "trace_get_info", 00:27:10.083 "trace_get_tpoint_group_mask", 00:27:10.083 "trace_disable_tpoint_group", 00:27:10.083 "trace_enable_tpoint_group", 00:27:10.083 "trace_clear_tpoint_mask", 00:27:10.083 "trace_set_tpoint_mask", 00:27:10.083 "notify_get_notifications", 00:27:10.083 "notify_get_types", 00:27:10.083 "spdk_get_version", 00:27:10.083 "rpc_get_methods" 00:27:10.083 ] 00:27:10.084 13:22:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:27:10.084 13:22:03 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:10.084 13:22:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:10.341 13:22:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:10.341 13:22:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59203 00:27:10.341 13:22:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59203 ']' 00:27:10.341 13:22:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59203 00:27:10.341 13:22:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:27:10.341 13:22:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:10.341 13:22:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59203 00:27:10.341 killing process with pid 59203 00:27:10.341 13:22:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:10.341 13:22:03 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:10.341 13:22:03 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59203' 00:27:10.341 13:22:03 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59203 00:27:10.342 13:22:03 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59203 00:27:13.627 ************************************ 00:27:13.627 END TEST spdkcli_tcp 00:27:13.627 ************************************ 00:27:13.627 00:27:13.627 real 0m5.296s 00:27:13.627 user 0m9.321s 00:27:13.627 sys 0m0.928s 00:27:13.627 13:22:06 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:13.627 13:22:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:13.627 13:22:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:27:13.627 13:22:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:13.627 13:22:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:13.627 13:22:06 -- common/autotest_common.sh@10 -- # set +x 00:27:13.627 ************************************ 00:27:13.627 START TEST dpdk_mem_utility 00:27:13.627 ************************************ 00:27:13.627 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:27:13.627 * Looking for test storage... 
00:27:13.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:27:13.627 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:13.627 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:27:13.627 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:13.627 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:13.628 13:22:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:27:13.628 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:13.628 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:13.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.628 --rc genhtml_branch_coverage=1 00:27:13.628 --rc genhtml_function_coverage=1 00:27:13.628 --rc genhtml_legend=1 00:27:13.628 --rc geninfo_all_blocks=1 00:27:13.628 --rc geninfo_unexecuted_blocks=1 00:27:13.628 00:27:13.628 ' 00:27:13.628 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:13.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.628 --rc 
genhtml_branch_coverage=1 00:27:13.628 --rc genhtml_function_coverage=1 00:27:13.628 --rc genhtml_legend=1 00:27:13.628 --rc geninfo_all_blocks=1 00:27:13.628 --rc geninfo_unexecuted_blocks=1 00:27:13.628 00:27:13.628 ' 00:27:13.628 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:13.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.628 --rc genhtml_branch_coverage=1 00:27:13.628 --rc genhtml_function_coverage=1 00:27:13.628 --rc genhtml_legend=1 00:27:13.628 --rc geninfo_all_blocks=1 00:27:13.628 --rc geninfo_unexecuted_blocks=1 00:27:13.628 00:27:13.628 ' 00:27:13.628 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:13.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.628 --rc genhtml_branch_coverage=1 00:27:13.628 --rc genhtml_function_coverage=1 00:27:13.628 --rc genhtml_legend=1 00:27:13.628 --rc geninfo_all_blocks=1 00:27:13.628 --rc geninfo_unexecuted_blocks=1 00:27:13.628 00:27:13.628 ' 00:27:13.628 13:22:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:27:13.628 13:22:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.628 13:22:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59340 00:27:13.628 13:22:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59340 00:27:13.628 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59340 ']' 00:27:13.628 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.628 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:13.628 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.628 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:13.628 13:22:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:27:13.628 [2024-12-06 13:22:06.591881] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
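The heap, mempool, and memzone dump that follows comes from a three-step flow visible in the trace below: an RPC asks the running target to write its DPDK memory stats to a file, then dpdk_mem_info.py post-processes that dump. The commands are taken from the trace; rpc_cmd in the xtrace is the test suite's wrapper around scripts/rpc.py, whose retry details are not shown here.

  # Sketch of the memory-dump flow traced below (paths from the log).
  scripts/rpc.py env_dpdk_get_mem_stats   # target responds: { "filename": "/tmp/spdk_mem_dump.txt" }
  scripts/dpdk_mem_info.py                # summarizes heaps, mempools, and memzones from the dump
  scripts/dpdk_mem_info.py -m 0           # per the trace, prints the detailed element list for heap id 0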
00:27:13.628 [2024-12-06 13:22:06.592073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59340 ] 00:27:13.886 [2024-12-06 13:22:06.788169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.145 [2024-12-06 13:22:06.998074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.081 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.081 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:27:15.081 13:22:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:27:15.081 13:22:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:27:15.081 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.081 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:27:15.081 { 00:27:15.081 "filename": "/tmp/spdk_mem_dump.txt" 00:27:15.081 } 00:27:15.081 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.081 13:22:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:27:15.341 DPDK memory size 824.000000 MiB in 1 heap(s) 00:27:15.341 1 heaps totaling size 824.000000 MiB 00:27:15.341 size: 824.000000 MiB heap id: 0 00:27:15.341 end heaps---------- 00:27:15.341 9 mempools totaling size 603.782043 MiB 00:27:15.341 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:27:15.341 size: 158.602051 MiB name: PDU_data_out_Pool 00:27:15.341 size: 100.555481 MiB name: bdev_io_59340 00:27:15.341 size: 50.003479 MiB name: msgpool_59340 00:27:15.341 size: 36.509338 MiB name: fsdev_io_59340 00:27:15.341 size: 21.763794 MiB name: PDU_Pool 00:27:15.341 size: 19.513306 MiB name: SCSI_TASK_Pool 00:27:15.341 size: 4.133484 MiB name: evtpool_59340 00:27:15.341 size: 0.026123 MiB name: Session_Pool 00:27:15.341 end mempools------- 00:27:15.341 6 memzones totaling size 4.142822 MiB 00:27:15.341 size: 1.000366 MiB name: RG_ring_0_59340 00:27:15.341 size: 1.000366 MiB name: RG_ring_1_59340 00:27:15.341 size: 1.000366 MiB name: RG_ring_4_59340 00:27:15.341 size: 1.000366 MiB name: RG_ring_5_59340 00:27:15.341 size: 0.125366 MiB name: RG_ring_2_59340 00:27:15.341 size: 0.015991 MiB name: RG_ring_3_59340 00:27:15.341 end memzones------- 00:27:15.341 13:22:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:27:15.341 heap id: 0 total size: 824.000000 MiB number of busy elements: 324 number of free elements: 18 00:27:15.341 list of free elements. 
size: 16.779175 MiB 00:27:15.341 element at address: 0x200006400000 with size: 1.995972 MiB 00:27:15.341 element at address: 0x20000a600000 with size: 1.995972 MiB 00:27:15.341 element at address: 0x200003e00000 with size: 1.991028 MiB 00:27:15.341 element at address: 0x200019500040 with size: 0.999939 MiB 00:27:15.341 element at address: 0x200019900040 with size: 0.999939 MiB 00:27:15.341 element at address: 0x200019a00000 with size: 0.999084 MiB 00:27:15.341 element at address: 0x200032600000 with size: 0.994324 MiB 00:27:15.341 element at address: 0x200000400000 with size: 0.992004 MiB 00:27:15.341 element at address: 0x200019200000 with size: 0.959656 MiB 00:27:15.341 element at address: 0x200019d00040 with size: 0.936401 MiB 00:27:15.341 element at address: 0x200000200000 with size: 0.716980 MiB 00:27:15.341 element at address: 0x20001b400000 with size: 0.560486 MiB 00:27:15.341 element at address: 0x200000c00000 with size: 0.489197 MiB 00:27:15.341 element at address: 0x200019600000 with size: 0.487976 MiB 00:27:15.341 element at address: 0x200019e00000 with size: 0.485413 MiB 00:27:15.341 element at address: 0x200012c00000 with size: 0.433472 MiB 00:27:15.341 element at address: 0x200028800000 with size: 0.390442 MiB 00:27:15.341 element at address: 0x200000800000 with size: 0.350891 MiB 00:27:15.341 list of standard malloc elements. size: 199.289917 MiB 00:27:15.341 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:27:15.341 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:27:15.341 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:27:15.341 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:27:15.341 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:27:15.341 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:27:15.341 element at address: 0x200019deff40 with size: 0.062683 MiB 00:27:15.341 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:27:15.341 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:27:15.341 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:27:15.341 element at address: 0x200012bff040 with size: 0.000305 MiB 00:27:15.341 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:27:15.341 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:27:15.341 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:27:15.342 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:27:15.342 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:27:15.342 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200000cff000 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bff180 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bff280 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bff380 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bff480 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bff580 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bff680 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bff780 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bff880 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bff980 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:27:15.342 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200019affc40 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4914c0 with size: 0.000244 MiB 
00:27:15.342 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:27:15.342 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:27:15.343 element at 
address: 0x20001b4946c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:27:15.343 element at address: 0x200028863f40 with size: 0.000244 MiB 00:27:15.343 element at address: 0x200028864040 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886af80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886b080 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886b180 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886b280 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886b380 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886b480 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886b580 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886b680 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886b780 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886b880 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886b980 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886be80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886c080 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886c180 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886c280 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886c380 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886c480 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886c580 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886c680 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886c780 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886c880 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886c980 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886cf80 
with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886d080 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886d180 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886d280 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886d380 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886d480 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886d580 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886d680 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886d780 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886d880 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886d980 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886da80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886db80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886de80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886df80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886e080 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886e180 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886e280 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886e380 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886e480 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886e580 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886e680 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886e780 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886e880 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886e980 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886f080 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886f180 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886f280 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886f380 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886f480 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886f580 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886f680 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886f780 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886f880 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886f980 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:27:15.343 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:27:15.343 list of memzone associated elements. 
size: 607.930908 MiB 00:27:15.343 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:27:15.343 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:27:15.343 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:27:15.343 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:27:15.343 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:27:15.343 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59340_0 00:27:15.343 element at address: 0x200000dff340 with size: 48.003113 MiB 00:27:15.343 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59340_0 00:27:15.343 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:27:15.343 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59340_0 00:27:15.343 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:27:15.343 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:27:15.343 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:27:15.343 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:27:15.343 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:27:15.344 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59340_0 00:27:15.344 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:27:15.344 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59340 00:27:15.344 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:27:15.344 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59340 00:27:15.344 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:27:15.344 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:27:15.344 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:27:15.344 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:27:15.344 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:27:15.344 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:27:15.344 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:27:15.344 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:27:15.344 element at address: 0x200000cff100 with size: 1.000549 MiB 00:27:15.344 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59340 00:27:15.344 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:27:15.344 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59340 00:27:15.344 element at address: 0x200019affd40 with size: 1.000549 MiB 00:27:15.344 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59340 00:27:15.344 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:27:15.344 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59340 00:27:15.344 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:27:15.344 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59340 00:27:15.344 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:27:15.344 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59340 00:27:15.344 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:27:15.344 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:27:15.344 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:27:15.344 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:27:15.344 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:27:15.344 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:27:15.344 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:27:15.344 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59340 00:27:15.344 element at address: 0x20000085df80 with size: 0.125549 MiB 00:27:15.344 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59340 00:27:15.344 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:27:15.344 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:27:15.344 element at address: 0x200028864140 with size: 0.023804 MiB 00:27:15.344 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:27:15.344 element at address: 0x200000859d40 with size: 0.016174 MiB 00:27:15.344 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59340 00:27:15.344 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:27:15.344 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:27:15.344 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:27:15.344 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59340 00:27:15.344 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:27:15.344 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59340 00:27:15.344 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:27:15.344 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59340 00:27:15.344 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:27:15.344 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:27:15.344 13:22:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:27:15.344 13:22:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59340 00:27:15.344 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59340 ']' 00:27:15.344 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59340 00:27:15.344 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:27:15.344 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.344 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59340 00:27:15.344 killing process with pid 59340 00:27:15.344 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:15.344 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:15.344 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59340' 00:27:15.344 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59340 00:27:15.344 13:22:08 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59340 00:27:18.633 00:27:18.633 real 0m4.959s 00:27:18.633 user 0m4.703s 00:27:18.633 sys 0m0.856s 00:27:18.633 13:22:11 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:18.633 ************************************ 00:27:18.633 13:22:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:27:18.633 END TEST dpdk_mem_utility 00:27:18.633 ************************************ 00:27:18.633 13:22:11 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:27:18.633 13:22:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:18.633 13:22:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.633 13:22:11 -- common/autotest_common.sh@10 -- # set +x 
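The dpdk_mem_utility run above reduces to three steps: start spdk_tgt, ask it over the RPC socket to dump DPDK memory statistics (env_dpdk_get_mem_stats, which reports the dump file /tmp/spdk_mem_dump.txt), and post-process that file with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element dump of heap id 0. A minimal standalone sketch of the same flow; the spdk_tgt binary path and the sleep standing in for the test's waitforlisten helper are assumptions, not taken from the log:

    # Sketch of the dpdk_mem_utility flow traced above (binary path assumed).
    ./build/bin/spdk_tgt -m 0x1 &               # target on core 0, like pid 59340 above
    spdkpid=$!
    sleep 2                                     # crude stand-in for waitforlisten
    ./scripts/rpc.py env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                  # heap/mempool/memzone summary
    ./scripts/dpdk_mem_info.py -m 0             # per-element detail for heap id 0
    kill "$spdkpid"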
00:27:18.633 ************************************ 00:27:18.633 START TEST event 00:27:18.633 ************************************ 00:27:18.633 13:22:11 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:27:18.633 * Looking for test storage... 00:27:18.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:27:18.633 13:22:11 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:18.633 13:22:11 event -- common/autotest_common.sh@1711 -- # lcov --version 00:27:18.633 13:22:11 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:18.633 13:22:11 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:18.633 13:22:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:18.633 13:22:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:18.633 13:22:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:18.633 13:22:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:27:18.633 13:22:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:27:18.633 13:22:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:27:18.633 13:22:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:27:18.633 13:22:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:27:18.633 13:22:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:27:18.633 13:22:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:27:18.633 13:22:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:18.633 13:22:11 event -- scripts/common.sh@344 -- # case "$op" in 00:27:18.633 13:22:11 event -- scripts/common.sh@345 -- # : 1 00:27:18.633 13:22:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:18.633 13:22:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:18.633 13:22:11 event -- scripts/common.sh@365 -- # decimal 1 00:27:18.633 13:22:11 event -- scripts/common.sh@353 -- # local d=1 00:27:18.633 13:22:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:18.633 13:22:11 event -- scripts/common.sh@355 -- # echo 1 00:27:18.633 13:22:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:27:18.633 13:22:11 event -- scripts/common.sh@366 -- # decimal 2 00:27:18.633 13:22:11 event -- scripts/common.sh@353 -- # local d=2 00:27:18.633 13:22:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:18.633 13:22:11 event -- scripts/common.sh@355 -- # echo 2 00:27:18.633 13:22:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:27:18.633 13:22:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:18.633 13:22:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:18.634 13:22:11 event -- scripts/common.sh@368 -- # return 0 00:27:18.634 13:22:11 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:18.634 13:22:11 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:18.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.634 --rc genhtml_branch_coverage=1 00:27:18.634 --rc genhtml_function_coverage=1 00:27:18.634 --rc genhtml_legend=1 00:27:18.634 --rc geninfo_all_blocks=1 00:27:18.634 --rc geninfo_unexecuted_blocks=1 00:27:18.634 00:27:18.634 ' 00:27:18.634 13:22:11 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:18.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.634 --rc genhtml_branch_coverage=1 00:27:18.634 --rc genhtml_function_coverage=1 00:27:18.634 --rc genhtml_legend=1 00:27:18.634 --rc 
geninfo_all_blocks=1 00:27:18.634 --rc geninfo_unexecuted_blocks=1 00:27:18.634 00:27:18.634 ' 00:27:18.634 13:22:11 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:18.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.634 --rc genhtml_branch_coverage=1 00:27:18.634 --rc genhtml_function_coverage=1 00:27:18.634 --rc genhtml_legend=1 00:27:18.634 --rc geninfo_all_blocks=1 00:27:18.634 --rc geninfo_unexecuted_blocks=1 00:27:18.634 00:27:18.634 ' 00:27:18.634 13:22:11 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:18.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.634 --rc genhtml_branch_coverage=1 00:27:18.634 --rc genhtml_function_coverage=1 00:27:18.634 --rc genhtml_legend=1 00:27:18.634 --rc geninfo_all_blocks=1 00:27:18.634 --rc geninfo_unexecuted_blocks=1 00:27:18.634 00:27:18.634 ' 00:27:18.634 13:22:11 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:18.634 13:22:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:27:18.634 13:22:11 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:27:18.634 13:22:11 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:27:18.634 13:22:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.634 13:22:11 event -- common/autotest_common.sh@10 -- # set +x 00:27:18.634 ************************************ 00:27:18.634 START TEST event_perf 00:27:18.634 ************************************ 00:27:18.634 13:22:11 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:27:18.634 Running I/O for 1 seconds...[2024-12-06 13:22:11.574289] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:27:18.634 [2024-12-06 13:22:11.574654] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59457 ] 00:27:18.893 [2024-12-06 13:22:11.773263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.893 [2024-12-06 13:22:11.936681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.893 [2024-12-06 13:22:11.936821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.893 [2024-12-06 13:22:11.936893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.893 Running I/O for 1 seconds...[2024-12-06 13:22:11.936923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:20.268 00:27:20.268 lcore 0: 142318 00:27:20.268 lcore 1: 142317 00:27:20.268 lcore 2: 142318 00:27:20.268 lcore 3: 142319 00:27:20.268 done. 
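The four lcore counters printed above are the entire result of event_perf: each reactor reports how many events it processed during the one-second run, so roughly 142k events per core per second here. A hedged one-liner to rerun it from a built tree and total the counters, relying only on the "lcore N: count" output format visible in the log:

    # 4 reactors (-m 0xF), 1 second (-t 1); sum the per-lcore event counts.
    ./test/event/event_perf/event_perf -m 0xF -t 1 \
      | awk '/^lcore/ {sum += $3} END {print "total: " sum " events/sec"}'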
00:27:20.268 00:27:20.268 ************************************ 00:27:20.268 END TEST event_perf 00:27:20.268 ************************************ 00:27:20.268 real 0m1.702s 00:27:20.269 user 0m4.410s 00:27:20.269 sys 0m0.168s 00:27:20.269 13:22:13 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.269 13:22:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:27:20.269 13:22:13 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:27:20.269 13:22:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:20.269 13:22:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.269 13:22:13 event -- common/autotest_common.sh@10 -- # set +x 00:27:20.269 ************************************ 00:27:20.269 START TEST event_reactor 00:27:20.269 ************************************ 00:27:20.269 13:22:13 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:27:20.269 [2024-12-06 13:22:13.334870] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:27:20.269 [2024-12-06 13:22:13.335201] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59502 ] 00:27:20.527 [2024-12-06 13:22:13.518054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.802 [2024-12-06 13:22:13.674388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.193 test_start 00:27:22.193 oneshot 00:27:22.193 tick 100 00:27:22.193 tick 100 00:27:22.193 tick 250 00:27:22.193 tick 100 00:27:22.193 tick 100 00:27:22.193 tick 100 00:27:22.193 tick 250 00:27:22.193 tick 500 00:27:22.193 tick 100 00:27:22.193 tick 100 00:27:22.193 tick 250 00:27:22.193 tick 100 00:27:22.193 tick 100 00:27:22.193 test_end 00:27:22.193 00:27:22.193 real 0m1.660s 00:27:22.193 user 0m1.423s 00:27:22.193 sys 0m0.127s 00:27:22.193 ************************************ 00:27:22.193 END TEST event_reactor 00:27:22.193 ************************************ 00:27:22.193 13:22:14 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:22.193 13:22:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:27:22.193 13:22:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:27:22.193 13:22:14 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:22.193 13:22:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:22.193 13:22:14 event -- common/autotest_common.sh@10 -- # set +x 00:27:22.193 ************************************ 00:27:22.193 START TEST event_reactor_perf 00:27:22.193 ************************************ 00:27:22.193 13:22:15 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:27:22.193 [2024-12-06 13:22:15.062506] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:27:22.193 [2024-12-06 13:22:15.062689] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59539 ] 00:27:22.193 [2024-12-06 13:22:15.262308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.450 [2024-12-06 13:22:15.417584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.822 test_start 00:27:23.822 test_end 00:27:23.822 Performance: 320430 events per second 00:27:23.822 00:27:23.822 real 0m1.690s 00:27:23.822 user 0m1.442s 00:27:23.822 sys 0m0.138s 00:27:23.822 13:22:16 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.822 13:22:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:27:23.822 ************************************ 00:27:23.822 END TEST event_reactor_perf 00:27:23.822 ************************************ 00:27:23.822 13:22:16 event -- event/event.sh@49 -- # uname -s 00:27:23.822 13:22:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:27:23.822 13:22:16 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:27:23.822 13:22:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:23.822 13:22:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.822 13:22:16 event -- common/autotest_common.sh@10 -- # set +x 00:27:23.822 ************************************ 00:27:23.822 START TEST event_scheduler 00:27:23.822 ************************************ 00:27:23.822 13:22:16 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:27:23.822 * Looking for test storage... 
00:27:23.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:27:23.822 13:22:16 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:23.822 13:22:16 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:27:23.822 13:22:16 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:24.080 13:22:16 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:24.080 13:22:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:27:24.080 13:22:16 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.080 13:22:16 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:24.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.080 --rc genhtml_branch_coverage=1 00:27:24.080 --rc genhtml_function_coverage=1 00:27:24.080 --rc genhtml_legend=1 00:27:24.080 --rc geninfo_all_blocks=1 00:27:24.080 --rc geninfo_unexecuted_blocks=1 00:27:24.080 00:27:24.080 ' 00:27:24.080 13:22:16 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:24.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.080 --rc genhtml_branch_coverage=1 00:27:24.080 --rc genhtml_function_coverage=1 00:27:24.080 --rc genhtml_legend=1 00:27:24.080 --rc geninfo_all_blocks=1 00:27:24.080 --rc geninfo_unexecuted_blocks=1 00:27:24.080 00:27:24.080 ' 00:27:24.080 13:22:16 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:24.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.080 --rc genhtml_branch_coverage=1 00:27:24.080 --rc genhtml_function_coverage=1 00:27:24.080 --rc genhtml_legend=1 00:27:24.080 --rc geninfo_all_blocks=1 00:27:24.080 --rc geninfo_unexecuted_blocks=1 00:27:24.080 00:27:24.080 ' 00:27:24.080 13:22:16 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:24.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.080 --rc genhtml_branch_coverage=1 00:27:24.080 --rc genhtml_function_coverage=1 00:27:24.080 --rc genhtml_legend=1 00:27:24.080 --rc geninfo_all_blocks=1 00:27:24.080 --rc geninfo_unexecuted_blocks=1 00:27:24.080 00:27:24.080 ' 00:27:24.080 13:22:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:27:24.080 13:22:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59615 00:27:24.080 13:22:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:27:24.080 13:22:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:27:24.080 13:22:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59615 00:27:24.080 13:22:16 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59615 ']' 00:27:24.080 13:22:16 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.080 13:22:16 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:24.080 13:22:16 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.081 13:22:16 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:24.081 13:22:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:24.081 [2024-12-06 13:22:17.161347] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:27:24.081 [2024-12-06 13:22:17.161598] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59615 ] 00:27:24.338 [2024-12-06 13:22:17.372851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.597 [2024-12-06 13:22:17.556087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.597 [2024-12-06 13:22:17.556197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.597 [2024-12-06 13:22:17.556271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.597 [2024-12-06 13:22:17.556284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.163 13:22:18 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.163 13:22:18 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:27:25.163 13:22:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:27:25.163 13:22:18 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.163 13:22:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:25.163 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:25.163 POWER: Cannot set governor of lcore 0 to userspace 00:27:25.163 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:25.163 POWER: Cannot set governor of lcore 0 to performance 00:27:25.163 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:25.163 POWER: Cannot set governor of lcore 0 to userspace 00:27:25.163 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:25.163 POWER: Cannot set governor of lcore 0 to userspace 00:27:25.163 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:27:25.163 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:27:25.163 POWER: Unable to set Power Management Environment for lcore 0 00:27:25.163 [2024-12-06 13:22:18.171155] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:27:25.163 [2024-12-06 13:22:18.171190] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:27:25.163 [2024-12-06 13:22:18.171206] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:27:25.163 [2024-12-06 13:22:18.171261] 
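Because the scheduler test app is launched with --wait-for-rpc, the framework stays uninitialized until the test drives it over RPC: the trace that follows selects the dynamic scheduler (the POWER errors below are it failing to take over the cpufreq governor before falling back to the default load/core/busy limits) and then runs framework_start_init. A sketch of the equivalent manual sequence, with paths from the trace and sleep again standing in for waitforlisten:

    # Start the scheduler test app paused, pick a scheduler, then init.
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    sleep 2                                      # stand-in for waitforlisten
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # scheduler_create_thread then issues the scheduler_plugin RPCs seen
    # below (scheduler_thread_create / _set_active / _delete).
    kill "$scheduler_pid"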
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:27:25.163 [2024-12-06 13:22:18.171275] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:27:25.163 [2024-12-06 13:22:18.171292] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:27:25.163 13:22:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.163 13:22:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:27:25.163 13:22:18 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.163 13:22:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:25.730 [2024-12-06 13:22:18.627517] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:27:25.730 13:22:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.730 13:22:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:27:25.730 13:22:18 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:25.730 13:22:18 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:25.730 13:22:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:25.730 ************************************ 00:27:25.730 START TEST scheduler_create_thread 00:27:25.730 ************************************ 00:27:25.730 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:25.731 2 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:25.731 3 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:25.731 4 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:25.731 5 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:25.731 6 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:25.731 7 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:25.731 8 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:25.731 9 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:25.731 10 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.731 13:22:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:26.298 13:22:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.298 13:22:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:27:26.298 13:22:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.298 13:22:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:27.674 13:22:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.674 13:22:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:27:27.674 13:22:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:27:27.674 13:22:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.674 13:22:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:29.049 13:22:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.049 00:27:29.049 real 0m3.105s 00:27:29.049 user 0m0.020s 00:27:29.049 sys 0m0.010s 00:27:29.049 13:22:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:29.049 13:22:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:29.049 ************************************ 00:27:29.049 END TEST scheduler_create_thread 00:27:29.049 ************************************ 00:27:29.049 13:22:21 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:29.049 13:22:21 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59615 00:27:29.049 13:22:21 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59615 ']' 00:27:29.049 13:22:21 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59615 00:27:29.049 13:22:21 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:27:29.049 13:22:21 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.049 13:22:21 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59615 00:27:29.049 13:22:21 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:29.049 13:22:21 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:29.049 killing process with pid 59615 00:27:29.049 13:22:21 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59615' 00:27:29.049 13:22:21 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59615 00:27:29.049 13:22:21 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59615 00:27:29.307 [2024-12-06 13:22:22.226265] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:27:30.682 00:27:30.682 real 0m6.979s 00:27:30.682 user 0m14.061s 00:27:30.682 sys 0m0.691s 00:27:30.682 13:22:23 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:30.682 13:22:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:30.682 ************************************ 00:27:30.682 END TEST event_scheduler 00:27:30.682 ************************************ 00:27:30.940 13:22:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:27:30.940 13:22:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:27:30.940 13:22:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:30.940 13:22:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.940 13:22:23 event -- common/autotest_common.sh@10 -- # set +x 00:27:30.940 ************************************ 00:27:30.940 START TEST app_repeat 00:27:30.940 ************************************ 00:27:30.940 13:22:23 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59740 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:27:30.940 Process app_repeat pid: 59740 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59740' 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:27:30.940 spdk_app_start Round 0 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:27:30.940 13:22:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59740 /var/tmp/spdk-nbd.sock 00:27:30.940 13:22:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59740 ']' 00:27:30.940 13:22:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:30.940 13:22:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:30.940 13:22:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:30.940 13:22:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.940 13:22:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:30.940 [2024-12-06 13:22:23.884826] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:27:30.940 [2024-12-06 13:22:23.884998] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59740 ] 00:27:31.199 [2024-12-06 13:22:24.072937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:31.199 [2024-12-06 13:22:24.207260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.199 [2024-12-06 13:22:24.207313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.126 13:22:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.126 13:22:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:27:32.126 13:22:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:32.382 Malloc0 00:27:32.382 13:22:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:32.639 Malloc1 00:27:32.639 13:22:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:32.639 13:22:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:27:32.897 /dev/nbd0 00:27:32.897 13:22:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:32.897 13:22:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:32.897 13:22:25 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:32.897 1+0 records in 00:27:32.897 1+0 records out 00:27:32.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300758 s, 13.6 MB/s 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:32.897 13:22:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:27:32.897 13:22:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:32.897 13:22:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:32.897 13:22:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:27:33.461 /dev/nbd1 00:27:33.461 13:22:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:33.461 13:22:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:33.461 1+0 records in 00:27:33.461 1+0 records out 00:27:33.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476567 s, 8.6 MB/s 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:33.461 13:22:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:27:33.461 13:22:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:33.461 13:22:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:33.461 13:22:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:33.461 13:22:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:33.462 
13:22:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:33.719 { 00:27:33.719 "nbd_device": "/dev/nbd0", 00:27:33.719 "bdev_name": "Malloc0" 00:27:33.719 }, 00:27:33.719 { 00:27:33.719 "nbd_device": "/dev/nbd1", 00:27:33.719 "bdev_name": "Malloc1" 00:27:33.719 } 00:27:33.719 ]' 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:33.719 { 00:27:33.719 "nbd_device": "/dev/nbd0", 00:27:33.719 "bdev_name": "Malloc0" 00:27:33.719 }, 00:27:33.719 { 00:27:33.719 "nbd_device": "/dev/nbd1", 00:27:33.719 "bdev_name": "Malloc1" 00:27:33.719 } 00:27:33.719 ]' 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:33.719 /dev/nbd1' 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:33.719 /dev/nbd1' 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:27:33.719 256+0 records in 00:27:33.719 256+0 records out 00:27:33.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00754005 s, 139 MB/s 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:33.719 256+0 records in 00:27:33.719 256+0 records out 00:27:33.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0345746 s, 30.3 MB/s 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:33.719 13:22:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:33.976 256+0 records in 00:27:33.976 256+0 records out 00:27:33.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0429356 s, 24.4 MB/s 00:27:33.976 13:22:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:33.976 13:22:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:33.976 13:22:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:33.976 13:22:26 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:33.977 13:22:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:34.234 13:22:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:34.234 13:22:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:34.234 13:22:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:34.234 13:22:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:34.234 13:22:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:34.234 13:22:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:34.234 13:22:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:34.234 13:22:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:34.234 13:22:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:34.234 13:22:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:34.492 13:22:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:34.492 13:22:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:34.492 13:22:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:34.492 13:22:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:34.492 13:22:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:34.492 13:22:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:34.492 13:22:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:34.492 13:22:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:34.492 13:22:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:34.492 13:22:27 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:34.492 13:22:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:34.750 13:22:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:34.750 13:22:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:34.750 13:22:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:34.750 13:22:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:34.750 13:22:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:27:34.750 13:22:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:34.750 13:22:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:27:34.750 13:22:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:27:34.750 13:22:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:27:34.750 13:22:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:27:34.750 13:22:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:34.750 13:22:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:27:34.750 13:22:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:27:35.317 13:22:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:27:37.218 [2024-12-06 13:22:29.819734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:37.218 [2024-12-06 13:22:29.982989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.218 [2024-12-06 13:22:29.983002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.218 [2024-12-06 13:22:30.258010] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:27:37.218 [2024-12-06 13:22:30.258205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:27:38.593 13:22:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:27:38.593 spdk_app_start Round 1 00:27:38.593 13:22:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:27:38.593 13:22:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59740 /var/tmp/spdk-nbd.sock 00:27:38.593 13:22:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59740 ']' 00:27:38.593 13:22:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:38.593 13:22:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:38.593 13:22:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:27:38.593 13:22:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.593 13:22:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:38.593 13:22:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.593 13:22:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:27:38.593 13:22:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:39.160 Malloc0 00:27:39.160 13:22:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:39.418 Malloc1 00:27:39.418 13:22:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:39.418 13:22:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:27:39.676 /dev/nbd0 00:27:39.676 13:22:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:39.676 13:22:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:39.676 1+0 records in 00:27:39.676 1+0 records out 
00:27:39.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383794 s, 10.7 MB/s 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:39.676 13:22:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:27:39.676 13:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:39.676 13:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:39.676 13:22:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:27:39.934 /dev/nbd1 00:27:39.934 13:22:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:39.934 13:22:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:39.934 1+0 records in 00:27:39.934 1+0 records out 00:27:39.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464245 s, 8.8 MB/s 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:39.934 13:22:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:27:39.934 13:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:39.934 13:22:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:39.934 13:22:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:39.934 13:22:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:39.934 13:22:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:40.233 { 00:27:40.233 "nbd_device": "/dev/nbd0", 00:27:40.233 "bdev_name": "Malloc0" 00:27:40.233 }, 00:27:40.233 { 00:27:40.233 "nbd_device": "/dev/nbd1", 00:27:40.233 "bdev_name": "Malloc1" 00:27:40.233 } 
00:27:40.233 ]' 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:40.233 { 00:27:40.233 "nbd_device": "/dev/nbd0", 00:27:40.233 "bdev_name": "Malloc0" 00:27:40.233 }, 00:27:40.233 { 00:27:40.233 "nbd_device": "/dev/nbd1", 00:27:40.233 "bdev_name": "Malloc1" 00:27:40.233 } 00:27:40.233 ]' 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:40.233 /dev/nbd1' 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:40.233 /dev/nbd1' 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:27:40.233 256+0 records in 00:27:40.233 256+0 records out 00:27:40.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114079 s, 91.9 MB/s 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:40.233 256+0 records in 00:27:40.233 256+0 records out 00:27:40.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283503 s, 37.0 MB/s 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:40.233 256+0 records in 00:27:40.233 256+0 records out 00:27:40.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0387812 s, 27.0 MB/s 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:40.233 13:22:33 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:40.233 13:22:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:27:40.491 13:22:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:40.491 13:22:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:27:40.491 13:22:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:40.491 13:22:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:40.491 13:22:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:40.491 13:22:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:40.491 13:22:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:40.491 13:22:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:27:40.491 13:22:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:40.491 13:22:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:40.749 13:22:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:40.749 13:22:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:40.749 13:22:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:40.749 13:22:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:40.749 13:22:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:40.749 13:22:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:40.749 13:22:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:40.749 13:22:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:40.749 13:22:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:40.749 13:22:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:41.008 13:22:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:41.008 13:22:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:41.008 13:22:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:41.008 13:22:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:41.008 13:22:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:41.008 13:22:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:41.008 13:22:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:41.008 13:22:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:41.008 13:22:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:41.008 13:22:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:41.008 13:22:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:41.266 13:22:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:41.266 13:22:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:41.266 13:22:34 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:27:41.525 13:22:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:41.525 13:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:41.525 13:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:27:41.525 13:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:27:41.525 13:22:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:27:41.525 13:22:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:27:41.525 13:22:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:27:41.525 13:22:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:41.525 13:22:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:27:41.525 13:22:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:27:42.090 13:22:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:27:43.461 [2024-12-06 13:22:36.419552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:43.718 [2024-12-06 13:22:36.586679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.718 [2024-12-06 13:22:36.586682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.976 [2024-12-06 13:22:36.861261] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:27:43.976 [2024-12-06 13:22:36.861445] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:27:44.936 spdk_app_start Round 2 00:27:44.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:44.936 13:22:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:27:44.936 13:22:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:27:44.937 13:22:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59740 /var/tmp/spdk-nbd.sock 00:27:44.937 13:22:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59740 ']' 00:27:44.937 13:22:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:44.937 13:22:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:44.937 13:22:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:27:44.937 13:22:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:44.937 13:22:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:45.194 13:22:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.194 13:22:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:27:45.194 13:22:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:45.453 Malloc0 00:27:45.453 13:22:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:46.019 Malloc1 00:27:46.019 13:22:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:46.019 13:22:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:27:46.278 /dev/nbd0 00:27:46.278 13:22:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:46.278 13:22:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:46.278 1+0 records in 00:27:46.278 1+0 records out 
00:27:46.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408914 s, 10.0 MB/s 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:46.278 13:22:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:27:46.278 13:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:46.278 13:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:46.278 13:22:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:27:46.538 /dev/nbd1 00:27:46.538 13:22:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:46.538 13:22:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:46.538 1+0 records in 00:27:46.538 1+0 records out 00:27:46.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420062 s, 9.8 MB/s 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:46.538 13:22:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:27:46.538 13:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:46.538 13:22:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:46.538 13:22:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:46.538 13:22:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:46.538 13:22:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:46.797 13:22:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:46.797 { 00:27:46.797 "nbd_device": "/dev/nbd0", 00:27:46.797 "bdev_name": "Malloc0" 00:27:46.797 }, 00:27:46.797 { 00:27:46.797 "nbd_device": "/dev/nbd1", 00:27:46.797 "bdev_name": "Malloc1" 00:27:46.797 } 
00:27:46.797 ]' 00:27:46.797 13:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:46.797 { 00:27:46.797 "nbd_device": "/dev/nbd0", 00:27:46.797 "bdev_name": "Malloc0" 00:27:46.797 }, 00:27:46.797 { 00:27:46.797 "nbd_device": "/dev/nbd1", 00:27:46.797 "bdev_name": "Malloc1" 00:27:46.797 } 00:27:46.797 ]' 00:27:46.797 13:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:47.057 /dev/nbd1' 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:47.057 /dev/nbd1' 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:27:47.057 256+0 records in 00:27:47.057 256+0 records out 00:27:47.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00850908 s, 123 MB/s 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:47.057 256+0 records in 00:27:47.057 256+0 records out 00:27:47.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337343 s, 31.1 MB/s 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:47.057 13:22:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:47.057 256+0 records in 00:27:47.057 256+0 records out 00:27:47.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0369638 s, 28.4 MB/s 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:47.057 13:22:40 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:47.057 13:22:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:47.316 13:22:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:47.316 13:22:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:47.316 13:22:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:47.316 13:22:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:47.316 13:22:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:47.316 13:22:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:47.316 13:22:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:47.316 13:22:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:47.316 13:22:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:47.316 13:22:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:47.575 13:22:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:47.834 13:22:40 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:27:48.092 13:22:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:48.092 13:22:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:27:48.092 13:22:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:48.092 13:22:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:27:48.092 13:22:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:27:48.092 13:22:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:27:48.092 13:22:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:27:48.092 13:22:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:48.092 13:22:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:27:48.092 13:22:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:27:48.350 13:22:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:27:50.254 [2024-12-06 13:22:42.941657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:50.254 [2024-12-06 13:22:43.103418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.254 [2024-12-06 13:22:43.103429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.511 [2024-12-06 13:22:43.364271] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:27:50.511 [2024-12-06 13:22:43.364428] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:27:51.445 13:22:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59740 /var/tmp/spdk-nbd.sock 00:27:51.445 13:22:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59740 ']' 00:27:51.445 13:22:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:51.445 13:22:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.445 13:22:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:51.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:27:51.445 13:22:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.445 13:22:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:51.702 13:22:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.702 13:22:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:27:51.702 13:22:44 event.app_repeat -- event/event.sh@39 -- # killprocess 59740 00:27:51.702 13:22:44 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59740 ']' 00:27:51.702 13:22:44 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59740 00:27:51.702 13:22:44 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:27:51.702 13:22:44 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:51.702 13:22:44 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59740 00:27:51.702 killing process with pid 59740 00:27:51.702 13:22:44 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:51.703 13:22:44 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:51.703 13:22:44 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59740' 00:27:51.703 13:22:44 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59740 00:27:51.703 13:22:44 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59740 00:27:53.074 spdk_app_start is called in Round 0. 00:27:53.074 Shutdown signal received, stop current app iteration 00:27:53.074 Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 reinitialization... 00:27:53.074 spdk_app_start is called in Round 1. 00:27:53.074 Shutdown signal received, stop current app iteration 00:27:53.074 Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 reinitialization... 00:27:53.074 spdk_app_start is called in Round 2. 00:27:53.074 Shutdown signal received, stop current app iteration 00:27:53.074 Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 reinitialization... 00:27:53.074 spdk_app_start is called in Round 3. 00:27:53.074 Shutdown signal received, stop current app iteration 00:27:53.074 13:22:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:27:53.074 13:22:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:27:53.074 ************************************ 00:27:53.074 END TEST app_repeat 00:27:53.074 ************************************ 00:27:53.074 00:27:53.074 real 0m22.303s 00:27:53.074 user 0m47.796s 00:27:53.074 sys 0m4.047s 00:27:53.074 13:22:46 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:53.075 13:22:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:53.075 13:22:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:27:53.075 13:22:46 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:27:53.075 13:22:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:53.075 13:22:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:53.075 13:22:46 event -- common/autotest_common.sh@10 -- # set +x 00:27:53.075 ************************************ 00:27:53.075 START TEST cpu_locks 00:27:53.075 ************************************ 00:27:53.075 13:22:46 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:27:53.333 * Looking for test storage... 
00:27:53.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:27:53.333 13:22:46 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:53.334 13:22:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:27:53.334 13:22:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:53.334 13:22:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:53.334 13:22:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:27:53.334 13:22:46 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:53.334 13:22:46 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:53.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.334 --rc genhtml_branch_coverage=1 00:27:53.334 --rc genhtml_function_coverage=1 00:27:53.334 --rc genhtml_legend=1 00:27:53.334 --rc geninfo_all_blocks=1 00:27:53.334 --rc geninfo_unexecuted_blocks=1 00:27:53.334 00:27:53.334 ' 00:27:53.334 13:22:46 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:53.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.334 --rc genhtml_branch_coverage=1 00:27:53.334 --rc genhtml_function_coverage=1 
00:27:53.334 --rc genhtml_legend=1 00:27:53.334 --rc geninfo_all_blocks=1 00:27:53.334 --rc geninfo_unexecuted_blocks=1 00:27:53.334 00:27:53.334 ' 00:27:53.334 13:22:46 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:53.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.334 --rc genhtml_branch_coverage=1 00:27:53.334 --rc genhtml_function_coverage=1 00:27:53.334 --rc genhtml_legend=1 00:27:53.334 --rc geninfo_all_blocks=1 00:27:53.334 --rc geninfo_unexecuted_blocks=1 00:27:53.334 00:27:53.334 ' 00:27:53.334 13:22:46 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:53.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.334 --rc genhtml_branch_coverage=1 00:27:53.334 --rc genhtml_function_coverage=1 00:27:53.334 --rc genhtml_legend=1 00:27:53.334 --rc geninfo_all_blocks=1 00:27:53.334 --rc geninfo_unexecuted_blocks=1 00:27:53.334 00:27:53.334 ' 00:27:53.334 13:22:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:27:53.334 13:22:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:27:53.334 13:22:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:27:53.334 13:22:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:27:53.334 13:22:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:53.334 13:22:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:53.334 13:22:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:53.334 ************************************ 00:27:53.334 START TEST default_locks 00:27:53.334 ************************************ 00:27:53.334 13:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:27:53.334 13:22:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:53.334 13:22:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60217 00:27:53.334 13:22:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60217 00:27:53.334 13:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60217 ']' 00:27:53.334 13:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.334 13:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.334 13:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.334 13:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.334 13:22:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:53.613 [2024-12-06 13:22:46.553029] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
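Note: the scripts/common.sh trace above (`lt 1.15 2` expanding to `cmp_versions 1.15 '<' 2`) decides which lcov option set to export by splitting both version strings on `.`, `-` and `:` and comparing them component by component. A simplified paraphrase of that helper; the real one also validates each component through `decimal` (scripts/common.sh@353), a step dropped here:

    # Simplified paraphrase of cmp_versions/lt from scripts/common.sh.
    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local ver1 ver2 op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # Missing components count as 0, so "1.15" compares as (1, 15) vs (2, 0).
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == ">" ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]   # all components equal
    }

    lt 1.15 2 && echo "lcov older than 2.x: use the pre-2.0 --rc option set"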
00:27:53.613 [2024-12-06 13:22:46.553219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60217 ] 00:27:53.896 [2024-12-06 13:22:46.763351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.896 [2024-12-06 13:22:46.973522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.306 13:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.306 13:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:27:55.306 13:22:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60217 00:27:55.306 13:22:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60217 00:27:55.306 13:22:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:55.873 13:22:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60217 00:27:55.873 13:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60217 ']' 00:27:55.873 13:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60217 00:27:55.873 13:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:27:55.873 13:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.873 13:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60217 00:27:55.873 killing process with pid 60217 00:27:55.873 13:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:55.873 13:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:55.873 13:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60217' 00:27:55.873 13:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60217 00:27:55.873 13:22:48 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60217 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60217 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60217 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60217 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60217 ']' 00:27:59.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
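Note: the `locks_exist` check traced at event/cpu_locks.sh@22 is the core assertion of this whole suite: a target that claimed its cores must hold a file lock whose path contains `spdk_cpu_lock`. Paraphrased directly from the trace:

    # Paraphrase of locks_exist (event/cpu_locks.sh@22): assert that the PID
    # holds at least one lock on a /var/tmp/spdk_cpu_lock_* file.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist 60217 && echo "pid 60217 holds its core lock"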
00:27:59.229 ERROR: process (pid: 60217) is no longer running 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:59.229 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60217) - No such process 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:27:59.229 00:27:59.229 real 0m5.441s 00:27:59.229 user 0m5.209s 00:27:59.229 sys 0m1.028s 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:59.229 13:22:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:59.229 ************************************ 00:27:59.229 END TEST default_locks 00:27:59.229 ************************************ 00:27:59.230 13:22:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:27:59.230 13:22:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:59.230 13:22:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:59.230 13:22:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:59.230 ************************************ 00:27:59.230 START TEST default_locks_via_rpc 00:27:59.230 ************************************ 00:27:59.230 13:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:27:59.230 13:22:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60308 00:27:59.230 13:22:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:59.230 13:22:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60308 00:27:59.230 13:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60308 ']' 00:27:59.230 13:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.230 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:27:59.230 13:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:59.230 13:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.230 13:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:59.230 13:22:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:59.230 [2024-12-06 13:22:52.066247] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:27:59.230 [2024-12-06 13:22:52.066796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60308 ] 00:27:59.230 [2024-12-06 13:22:52.277587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.488 [2024-12-06 13:22:52.479661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60308 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60308 00:28:00.863 13:22:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:01.121 13:22:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60308 00:28:01.121 13:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60308 ']' 00:28:01.121 13:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60308 00:28:01.121 13:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:28:01.121 13:22:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:01.121 13:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60308 00:28:01.121 13:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:01.121 13:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:01.121 13:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60308' 00:28:01.121 killing process with pid 60308 00:28:01.121 13:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60308 00:28:01.121 13:22:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60308 00:28:04.586 ************************************ 00:28:04.586 END TEST default_locks_via_rpc 00:28:04.586 ************************************ 00:28:04.586 00:28:04.586 real 0m5.359s 00:28:04.586 user 0m5.133s 00:28:04.586 sys 0m0.968s 00:28:04.586 13:22:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.586 13:22:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:04.586 13:22:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:28:04.586 13:22:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:04.586 13:22:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.586 13:22:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:04.586 ************************************ 00:28:04.586 START TEST non_locking_app_on_locked_coremask 00:28:04.586 ************************************ 00:28:04.586 13:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:28:04.586 13:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60399 00:28:04.586 13:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:04.586 13:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60399 /var/tmp/spdk.sock 00:28:04.586 13:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60399 ']' 00:28:04.586 13:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.586 13:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.586 13:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.586 13:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.586 13:22:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:04.586 [2024-12-06 13:22:57.450267] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
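Note: unlike default_locks, the default_locks_via_rpc run that just ended toggles the locks at runtime rather than at startup: `rpc_cmd framework_disable_cpumask_locks` releases them and `framework_enable_cpumask_locks` re-claims them (event/cpu_locks.sh@65-69 in the trace). The same calls from a shell, assuming the default /var/tmp/spdk.sock socket; `$tgt_pid` is illustrative:

    tgt_pid=$(pgrep -f spdk_tgt)                            # illustrative only
    scripts/rpc.py framework_disable_cpumask_locks          # release per-core lock files
    lslocks -p "$tgt_pid" | grep -c spdk_cpu_lock || true   # prints 0 now
    scripts/rpc.py framework_enable_cpumask_locks           # re-claim them
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "locks re-acquired"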
00:28:04.586 [2024-12-06 13:22:57.450504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60399 ] 00:28:04.873 [2024-12-06 13:22:57.640286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.873 [2024-12-06 13:22:57.806048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:06.246 13:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.246 13:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:06.246 13:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60415 00:28:06.246 13:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60415 /var/tmp/spdk2.sock 00:28:06.246 13:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60415 ']' 00:28:06.246 13:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:06.246 13:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:06.246 13:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:06.246 13:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:06.246 13:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:28:06.246 13:22:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:06.246 [2024-12-06 13:22:59.167051] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:28:06.246 [2024-12-06 13:22:59.168212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60415 ] 00:28:06.503 [2024-12-06 13:22:59.395749] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
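Note: the setup just traced is the essence of non_locking_app_on_locked_coremask: the first target claims core 0 (mask 0x1), and a second target may share that core only because it starts with --disable-cpumask-locks and its own RPC socket. In outline:

    build/bin/spdk_tgt -m 0x1 &        # claims /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # Without --disable-cpumask-locks the second start would fail; compare the
    # locking_app_on_locked_coremask run later in this log.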
00:28:06.503 [2024-12-06 13:22:59.395856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.761 [2024-12-06 13:22:59.718303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.288 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.288 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:09.288 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60399 00:28:09.288 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:09.288 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60399 00:28:09.856 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60399 00:28:09.856 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60399 ']' 00:28:09.856 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60399 00:28:09.856 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:28:09.856 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:10.115 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60399 00:28:10.115 killing process with pid 60399 00:28:10.115 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:10.115 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:10.115 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60399' 00:28:10.115 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60399 00:28:10.115 13:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60399 00:28:16.678 13:23:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60415 00:28:16.678 13:23:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60415 ']' 00:28:16.678 13:23:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60415 00:28:16.678 13:23:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:28:16.678 13:23:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.678 13:23:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60415 00:28:16.678 killing process with pid 60415 00:28:16.678 13:23:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:16.678 13:23:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:16.678 13:23:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60415' 00:28:16.678 13:23:09 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60415 00:28:16.678 13:23:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60415 00:28:19.210 ************************************ 00:28:19.210 END TEST non_locking_app_on_locked_coremask 00:28:19.210 ************************************ 00:28:19.210 00:28:19.210 real 0m14.858s 00:28:19.210 user 0m15.225s 00:28:19.210 sys 0m2.041s 00:28:19.210 13:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.210 13:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:19.210 13:23:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:28:19.210 13:23:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:19.210 13:23:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.210 13:23:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:19.210 ************************************ 00:28:19.210 START TEST locking_app_on_unlocked_coremask 00:28:19.210 ************************************ 00:28:19.210 13:23:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:28:19.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.210 13:23:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60596 00:28:19.210 13:23:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60596 /var/tmp/spdk.sock 00:28:19.210 13:23:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60596 ']' 00:28:19.210 13:23:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:28:19.210 13:23:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.210 13:23:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.210 13:23:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.210 13:23:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.210 13:23:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:19.469 [2024-12-06 13:23:12.395147] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:28:19.470 [2024-12-06 13:23:12.395388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60596 ] 00:28:19.729 [2024-12-06 13:23:12.605421] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
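Note: every test above tears down through the same `killprocess` helper (autotest_common.sh@954-978): verify the PID is alive with `kill -0`, read its command name via `ps --no-headers -o comm=`, test it against `sudo` (the `reactor_0 = sudo` step in the trace), then SIGTERM and reap. A simplified paraphrase; the real helper treats a sudo-wrapped target specially, while this sketch just refuses:

    killprocess() {
        local pid=$1 process_name=
        kill -0 "$pid" || return                  # must still be running
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1   # simplified: never SIGTERM sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                       # reap; tolerate nonzero exit
    }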
00:28:19.729 [2024-12-06 13:23:12.605572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.729 [2024-12-06 13:23:12.805901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:21.106 13:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.106 13:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:21.106 13:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60622 00:28:21.106 13:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:28:21.106 13:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60622 /var/tmp/spdk2.sock 00:28:21.106 13:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60622 ']' 00:28:21.106 13:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:21.106 13:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.106 13:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:21.106 13:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.106 13:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:21.106 [2024-12-06 13:23:14.192291] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
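Note: `waitforlisten` precedes every RPC interaction above, but the trace only exposes its retry bookkeeping (max_retries=100 and the final `(( i == 0 ))` test). A simplified guess at its shape, polling the socket with the `rpc_get_methods` RPC until the target answers; this is an assumption about the polling step, not the actual body of the helper in autotest_common.sh:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" || return 1      # target died before listening
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                    # socket is up and answering
            fi
            sleep 0.5
        done
        return 1                            # retries exhausted
    }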
00:28:21.106 [2024-12-06 13:23:14.192853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60622 ] 00:28:21.466 [2024-12-06 13:23:14.412797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.727 [2024-12-06 13:23:14.760103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.257 13:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.257 13:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:24.257 13:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60622 00:28:24.257 13:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:24.257 13:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60622 00:28:25.234 13:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60596 00:28:25.234 13:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60596 ']' 00:28:25.234 13:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60596 00:28:25.234 13:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:28:25.234 13:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:25.234 13:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60596 00:28:25.234 killing process with pid 60596 00:28:25.234 13:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:25.234 13:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:25.234 13:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60596' 00:28:25.234 13:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60596 00:28:25.234 13:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60596 00:28:31.789 13:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60622 00:28:31.789 13:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60622 ']' 00:28:31.789 13:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60622 00:28:31.789 13:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:28:31.789 13:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.789 13:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60622 00:28:31.789 13:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:31.789 killing process with pid 60622 00:28:31.789 13:23:24 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:31.789 13:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60622' 00:28:31.789 13:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60622 00:28:31.789 13:23:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60622 00:28:35.285 ************************************ 00:28:35.285 END TEST locking_app_on_unlocked_coremask 00:28:35.285 ************************************ 00:28:35.285 00:28:35.285 real 0m15.521s 00:28:35.285 user 0m15.805s 00:28:35.285 sys 0m2.153s 00:28:35.285 13:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.285 13:23:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:35.285 13:23:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:28:35.285 13:23:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:35.285 13:23:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:35.285 13:23:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:35.285 ************************************ 00:28:35.285 START TEST locking_app_on_locked_coremask 00:28:35.285 ************************************ 00:28:35.285 13:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:28:35.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.285 13:23:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60799 00:28:35.285 13:23:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60799 /var/tmp/spdk.sock 00:28:35.285 13:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60799 ']' 00:28:35.285 13:23:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:35.285 13:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.285 13:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.285 13:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.285 13:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.285 13:23:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:35.285 [2024-12-06 13:23:27.953131] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
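Note: the next run, locking_app_on_locked_coremask, starts a second target on the *same* mask 0x1 without disabling locks and expects startup itself to fail. The contended resource is one file per claimed core under /var/tmp (the glob /var/tmp/spdk_cpu_lock_* appears later in check_remaining_locks). SPDK takes these locks in C inside app.c's claim_cpu_cores; the bash below only illustrates the equivalent flock semantics, and is not the actual mechanism:

    # Illustration of per-core lock-file semantics (assumed flock-based).
    exec 9> /var/tmp/spdk_cpu_lock_000
    if ! flock -n 9; then
        echo "core 0 already claimed by another process" >&2
        exit 1
    fi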
00:28:35.285 [2024-12-06 13:23:27.953533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60799 ] 00:28:35.285 [2024-12-06 13:23:28.138993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.285 [2024-12-06 13:23:28.305005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60826 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60826 /var/tmp/spdk2.sock 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60826 /var/tmp/spdk2.sock 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60826 /var/tmp/spdk2.sock 00:28:36.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60826 ']' 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.660 13:23:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:36.660 [2024-12-06 13:23:29.664425] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:28:36.660 [2024-12-06 13:23:29.664673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60826 ] 00:28:36.919 [2024-12-06 13:23:29.882140] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60799 has claimed it. 00:28:36.919 [2024-12-06 13:23:29.882273] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:28:37.485 ERROR: process (pid: 60826) is no longer running 00:28:37.485 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60826) - No such process 00:28:37.485 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.485 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:28:37.485 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:28:37.485 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.485 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.485 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.485 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60799 00:28:37.485 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60799 00:28:37.485 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:38.054 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60799 00:28:38.054 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60799 ']' 00:28:38.054 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60799 00:28:38.054 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:28:38.054 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:38.054 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60799 00:28:38.054 killing process with pid 60799 00:28:38.054 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:38.054 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:38.054 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60799' 00:28:38.054 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60799 00:28:38.054 13:23:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60799 00:28:41.336 ************************************ 00:28:41.336 END TEST locking_app_on_locked_coremask 00:28:41.336 ************************************ 00:28:41.337 00:28:41.337 real 0m6.276s 00:28:41.337 user 0m6.548s 00:28:41.337 sys 0m1.244s 00:28:41.337 13:23:34 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.337 13:23:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:41.337 13:23:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:28:41.337 13:23:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:41.337 13:23:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:41.337 13:23:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:41.337 ************************************ 00:28:41.337 START TEST locking_overlapped_coremask 00:28:41.337 ************************************ 00:28:41.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.337 13:23:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:28:41.337 13:23:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60901 00:28:41.337 13:23:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:28:41.337 13:23:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60901 /var/tmp/spdk.sock 00:28:41.337 13:23:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60901 ']' 00:28:41.337 13:23:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.337 13:23:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.337 13:23:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.337 13:23:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.337 13:23:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:41.337 [2024-12-06 13:23:34.311439] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
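Note: locking_overlapped_coremask, starting here, pits a three-core target (-m 0x7, cores 0-2) against a second one on -m 0x1c (cores 2-4). The failure traced below is forced by the one-core intersection of the two masks:

    # The overlap that triggers "Cannot create lock on core 2" below:
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2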
00:28:41.337 [2024-12-06 13:23:34.311616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60901 ] 00:28:41.594 [2024-12-06 13:23:34.498678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:41.594 [2024-12-06 13:23:34.678410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.594 [2024-12-06 13:23:34.678530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.594 [2024-12-06 13:23:34.678554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60930 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60930 /var/tmp/spdk2.sock 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60930 /var/tmp/spdk2.sock 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:28:42.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60930 /var/tmp/spdk2.sock 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60930 ']' 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.966 13:23:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:43.222 [2024-12-06 13:23:36.122633] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:28:43.222 [2024-12-06 13:23:36.122799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60930 ] 00:28:43.479 [2024-12-06 13:23:36.335173] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60901 has claimed it. 00:28:43.479 [2024-12-06 13:23:36.335295] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:28:43.736 ERROR: process (pid: 60930) is no longer running 00:28:43.736 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60930) - No such process 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60901 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60901 ']' 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60901 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.736 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60901 00:28:43.992 killing process with pid 60901 00:28:43.992 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:43.992 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:43.992 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60901' 00:28:43.992 13:23:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60901 00:28:43.992 13:23:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60901 00:28:47.276 00:28:47.276 real 0m5.765s 00:28:47.276 user 0m15.601s 00:28:47.276 sys 0m1.024s 00:28:47.276 13:23:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.276 13:23:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:47.276 ************************************ 00:28:47.276 END TEST locking_overlapped_coremask 00:28:47.276 ************************************ 00:28:47.276 13:23:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:28:47.276 13:23:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:47.276 13:23:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.276 13:23:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:47.276 ************************************ 00:28:47.276 START TEST locking_overlapped_coremask_via_rpc 00:28:47.276 ************************************ 00:28:47.276 13:23:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:28:47.276 13:23:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61005 00:28:47.276 13:23:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:28:47.276 13:23:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61005 /var/tmp/spdk.sock 00:28:47.276 13:23:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61005 ']' 00:28:47.276 13:23:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.276 13:23:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.276 13:23:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.276 13:23:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.276 13:23:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:47.276 [2024-12-06 13:23:40.107677] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:28:47.276 [2024-12-06 13:23:40.108164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61005 ] 00:28:47.276 [2024-12-06 13:23:40.293867] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
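Note: after the failed overlap attempt, the suite verified (event/cpu_locks.sh@36-38, traced a little earlier) that exactly the first target's three lock files survive. That check is a straight glob-versus-brace-expansion comparison:

    # Paraphrase of check_remaining_locks from the trace above: cores 0-2 must
    # still be locked, and nothing else.
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }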
00:28:47.276 [2024-12-06 13:23:40.293941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:47.535 [2024-12-06 13:23:40.464063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.535 [2024-12-06 13:23:40.464177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.535 [2024-12-06 13:23:40.464192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.912 13:23:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.912 13:23:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:48.912 13:23:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61029 00:28:48.912 13:23:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:28:48.912 13:23:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61029 /var/tmp/spdk2.sock 00:28:48.912 13:23:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61029 ']' 00:28:48.912 13:23:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:48.912 13:23:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.912 13:23:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:48.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:48.912 13:23:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.912 13:23:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:48.912 [2024-12-06 13:23:41.814707] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:28:48.912 [2024-12-06 13:23:41.815202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61029 ] 00:28:49.170 [2024-12-06 13:23:42.031336] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
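Both instances start cleanly despite the overlapping masks because --disable-cpumask-locks defers the core claims; each target simply listens on its own RPC socket, the default /var/tmp/spdk.sock for the first and -r /var/tmp/spdk2.sock for the second. A rough sketch of the pattern, with a sleep standing in for the waitforlisten polling the test really uses:

build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
sleep 1    # placeholder: the test polls both sockets via waitforlisten instead
scripts/rpc.py rpc_get_methods                          # talks to the first target
scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods   # talks to the second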
00:28:49.170 [2024-12-06 13:23:42.035457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:49.429 [2024-12-06 13:23:42.363243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:49.429 [2024-12-06 13:23:42.366495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:49.429 [2024-12-06 13:23:42.366505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:52.014 [2024-12-06 13:23:44.755705] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61005 has claimed it. 
00:28:52.014 request: 00:28:52.014 { 00:28:52.014 "method": "framework_enable_cpumask_locks", 00:28:52.014 "req_id": 1 00:28:52.014 } 00:28:52.014 Got JSON-RPC error response 00:28:52.014 response: 00:28:52.014 { 00:28:52.014 "code": -32603, 00:28:52.014 "message": "Failed to claim CPU core: 2" 00:28:52.014 } 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61005 /var/tmp/spdk.sock 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61005 ']' 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.014 13:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:52.014 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.014 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:52.014 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61029 /var/tmp/spdk2.sock 00:28:52.014 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61029 ']' 00:28:52.014 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:52.014 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.014 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:52.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
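The error response above is the behavior under test: the first target (pid 61005) has already claimed cores 0-2 via framework_enable_cpumask_locks, creating /var/tmp/spdk_cpu_lock_000 through _002, so the same RPC against the second target must fail on the shared core. A hand-run equivalent of what cpu_locks.sh@155-156 does, sketched with rpc.py and the socket paths from the log:

scripts/rpc.py framework_enable_cpumask_locks        # first target: claims succeed
ls /var/tmp/spdk_cpu_lock_*                          # expect _000 _001 _002 only
if ! scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
    echo "expected: code -32603, 'Failed to claim CPU core: 2'"
fi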
00:28:52.014 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.014 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:52.578 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.578 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:28:52.578 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:28:52.578 ************************************ 00:28:52.578 END TEST locking_overlapped_coremask_via_rpc 00:28:52.578 ************************************ 00:28:52.578 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:28:52.578 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:28:52.579 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:28:52.579 00:28:52.579 real 0m5.424s 00:28:52.579 user 0m1.785s 00:28:52.579 sys 0m0.324s 00:28:52.579 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.579 13:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:52.579 13:23:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:28:52.579 13:23:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61005 ]] 00:28:52.579 13:23:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61005 00:28:52.579 13:23:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61005 ']' 00:28:52.579 13:23:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61005 00:28:52.579 13:23:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:28:52.579 13:23:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:52.579 13:23:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61005 00:28:52.579 13:23:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:52.579 killing process with pid 61005 00:28:52.579 13:23:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:52.579 13:23:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61005' 00:28:52.579 13:23:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61005 00:28:52.579 13:23:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61005 00:28:55.857 13:23:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61029 ]] 00:28:55.857 13:23:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61029 00:28:55.857 13:23:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61029 ']' 00:28:55.857 13:23:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61029 00:28:55.857 13:23:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:28:55.857 13:23:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:55.857 
13:23:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61029 00:28:55.857 killing process with pid 61029 00:28:55.857 13:23:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:55.858 13:23:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:55.858 13:23:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61029' 00:28:55.858 13:23:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61029 00:28:55.858 13:23:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61029 00:28:59.145 13:23:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:28:59.145 13:23:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:28:59.145 13:23:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61005 ]] 00:28:59.145 13:23:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61005 00:28:59.145 13:23:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61005 ']' 00:28:59.145 13:23:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61005 00:28:59.145 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61005) - No such process 00:28:59.145 Process with pid 61005 is not found 00:28:59.145 13:23:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61005 is not found' 00:28:59.145 Process with pid 61029 is not found 00:28:59.145 13:23:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61029 ]] 00:28:59.145 13:23:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61029 00:28:59.145 13:23:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61029 ']' 00:28:59.145 13:23:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61029 00:28:59.145 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61029) - No such process 00:28:59.145 13:23:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61029 is not found' 00:28:59.145 13:23:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:28:59.145 00:28:59.145 real 1m5.498s 00:28:59.145 user 1m49.755s 00:28:59.145 sys 0m10.463s 00:28:59.145 13:23:51 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.145 13:23:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:59.145 ************************************ 00:28:59.145 END TEST cpu_locks 00:28:59.145 ************************************ 00:28:59.145 ************************************ 00:28:59.145 END TEST event 00:28:59.145 ************************************ 00:28:59.145 00:28:59.145 real 1m40.427s 00:28:59.145 user 2m59.115s 00:28:59.145 sys 0m15.996s 00:28:59.145 13:23:51 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.145 13:23:51 event -- common/autotest_common.sh@10 -- # set +x 00:28:59.145 13:23:51 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:28:59.145 13:23:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:59.145 13:23:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.145 13:23:51 -- common/autotest_common.sh@10 -- # set +x 00:28:59.145 ************************************ 00:28:59.145 START TEST thread 00:28:59.145 ************************************ 00:28:59.145 13:23:51 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:28:59.145 * Looking for test storage... 
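The killprocess and cleanup sequence traced through this test follows a single pattern in autotest_common.sh: confirm the pid still names an SPDK reactor via ps -o comm=, refuse to touch anything running as sudo, then kill and wait, treating an already-dead process (as happens to 61005 and 61029 above when cleanup runs a second time) as success. A condensed paraphrase, not the verbatim helper:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"    # the second-cleanup path above
        return 0
    fi
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0, reactor_2
    [[ $process_name != sudo ]] || return 1          # never kill a privileged wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null || true   # tolerate exit races
}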
00:28:59.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:28:59.145 13:23:51 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:59.145 13:23:51 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:59.145 13:23:51 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:28:59.145 13:23:51 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:59.145 13:23:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.145 13:23:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.145 13:23:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.145 13:23:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.145 13:23:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.145 13:23:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.145 13:23:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.145 13:23:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.145 13:23:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.145 13:23:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.145 13:23:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.145 13:23:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:28:59.145 13:23:51 thread -- scripts/common.sh@345 -- # : 1 00:28:59.145 13:23:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.145 13:23:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:59.145 13:23:51 thread -- scripts/common.sh@365 -- # decimal 1 00:28:59.145 13:23:51 thread -- scripts/common.sh@353 -- # local d=1 00:28:59.145 13:23:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.145 13:23:51 thread -- scripts/common.sh@355 -- # echo 1 00:28:59.145 13:23:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.145 13:23:51 thread -- scripts/common.sh@366 -- # decimal 2 00:28:59.145 13:23:51 thread -- scripts/common.sh@353 -- # local d=2 00:28:59.145 13:23:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.145 13:23:51 thread -- scripts/common.sh@355 -- # echo 2 00:28:59.145 13:23:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.145 13:23:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.145 13:23:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.145 13:23:51 thread -- scripts/common.sh@368 -- # return 0 00:28:59.145 13:23:51 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.145 13:23:51 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:59.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.145 --rc genhtml_branch_coverage=1 00:28:59.145 --rc genhtml_function_coverage=1 00:28:59.145 --rc genhtml_legend=1 00:28:59.145 --rc geninfo_all_blocks=1 00:28:59.145 --rc geninfo_unexecuted_blocks=1 00:28:59.145 00:28:59.145 ' 00:28:59.145 13:23:51 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:59.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.145 --rc genhtml_branch_coverage=1 00:28:59.145 --rc genhtml_function_coverage=1 00:28:59.145 --rc genhtml_legend=1 00:28:59.145 --rc geninfo_all_blocks=1 00:28:59.145 --rc geninfo_unexecuted_blocks=1 00:28:59.145 00:28:59.145 ' 00:28:59.145 13:23:51 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:59.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:28:59.145 --rc genhtml_branch_coverage=1 00:28:59.145 --rc genhtml_function_coverage=1 00:28:59.145 --rc genhtml_legend=1 00:28:59.145 --rc geninfo_all_blocks=1 00:28:59.145 --rc geninfo_unexecuted_blocks=1 00:28:59.145 00:28:59.146 ' 00:28:59.146 13:23:51 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:59.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.146 --rc genhtml_branch_coverage=1 00:28:59.146 --rc genhtml_function_coverage=1 00:28:59.146 --rc genhtml_legend=1 00:28:59.146 --rc geninfo_all_blocks=1 00:28:59.146 --rc geninfo_unexecuted_blocks=1 00:28:59.146 00:28:59.146 ' 00:28:59.146 13:23:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:28:59.146 13:23:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:28:59.146 13:23:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.146 13:23:51 thread -- common/autotest_common.sh@10 -- # set +x 00:28:59.146 ************************************ 00:28:59.146 START TEST thread_poller_perf 00:28:59.146 ************************************ 00:28:59.146 13:23:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:28:59.146 [2024-12-06 13:23:52.040216] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:28:59.146 [2024-12-06 13:23:52.041516] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61246 ] 00:28:59.146 [2024-12-06 13:23:52.242623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.402 [2024-12-06 13:23:52.434232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.402 Running 1000 pollers for 1 seconds with 1 microseconds period. 
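The banner decodes the poller_perf flags: -b 1000 registers 1000 pollers, -l sets each poller's period in microseconds (1 here, 0 in the follow-up run, meaning the poller is dispatched on every reactor iteration), and -t 1 measures for one second. The flag meanings are read off the two banners rather than from the tool's help text, so treat this rerun sketch accordingly:

test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period
test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # same pollers, untimed (period 0)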
00:29:00.775 [2024-12-06T13:23:53.875Z] ====================================== 00:29:00.775 [2024-12-06T13:23:53.875Z] busy:2115577100 (cyc) 00:29:00.775 [2024-12-06T13:23:53.875Z] total_run_count: 349000 00:29:00.775 [2024-12-06T13:23:53.875Z] tsc_hz: 2100000000 (cyc) 00:29:00.775 [2024-12-06T13:23:53.875Z] ====================================== 00:29:00.775 [2024-12-06T13:23:53.875Z] poller_cost: 6061 (cyc), 2886 (nsec) 00:29:00.775 00:29:00.775 ************************************ 00:29:00.775 END TEST thread_poller_perf 00:29:00.775 ************************************ 00:29:00.775 real 0m1.726s 00:29:00.775 user 0m1.470s 00:29:00.775 sys 0m0.146s 00:29:00.775 13:23:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.775 13:23:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:29:00.775 13:23:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:29:00.775 13:23:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:29:00.775 13:23:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.775 13:23:53 thread -- common/autotest_common.sh@10 -- # set +x 00:29:00.775 ************************************ 00:29:00.775 START TEST thread_poller_perf 00:29:00.775 ************************************ 00:29:00.775 13:23:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:29:00.775 [2024-12-06 13:23:53.826413] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:29:00.775 [2024-12-06 13:23:53.826597] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61288 ] 00:29:01.033 [2024-12-06 13:23:54.026463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.290 Running 1000 pollers for 1 seconds with 0 microseconds period. 
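The summary above is internally consistent and can be checked by hand: poller_cost is simply busy cycles divided by the number of poller invocations, converted to wall time through tsc_hz.

    2115577100 cyc / 349000 calls = 6061 cyc per call
    6061 cyc / 2.1 cyc per nsec (tsc_hz 2100000000) = 2886 nsec

The same arithmetic applies to the zero-period run whose results follow.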
00:29:01.290 [2024-12-06 13:23:54.180898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.665 [2024-12-06T13:23:55.765Z] ====================================== 00:29:02.665 [2024-12-06T13:23:55.765Z] busy:2103629586 (cyc) 00:29:02.665 [2024-12-06T13:23:55.765Z] total_run_count: 4447000 00:29:02.665 [2024-12-06T13:23:55.765Z] tsc_hz: 2100000000 (cyc) 00:29:02.665 [2024-12-06T13:23:55.765Z] ====================================== 00:29:02.665 [2024-12-06T13:23:55.765Z] poller_cost: 473 (cyc), 225 (nsec) 00:29:02.665 ************************************ 00:29:02.665 END TEST thread_poller_perf 00:29:02.665 ************************************ 00:29:02.665 00:29:02.665 real 0m1.676s 00:29:02.665 user 0m1.431s 00:29:02.665 sys 0m0.136s 00:29:02.665 13:23:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.665 13:23:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:29:02.665 13:23:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:29:02.665 ************************************ 00:29:02.665 END TEST thread 00:29:02.665 ************************************ 00:29:02.665 00:29:02.665 real 0m3.719s 00:29:02.665 user 0m3.042s 00:29:02.665 sys 0m0.461s 00:29:02.665 13:23:55 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.665 13:23:55 thread -- common/autotest_common.sh@10 -- # set +x 00:29:02.665 13:23:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:29:02.665 13:23:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:29:02.665 13:23:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:02.665 13:23:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.665 13:23:55 -- common/autotest_common.sh@10 -- # set +x 00:29:02.665 ************************************ 00:29:02.665 START TEST app_cmdline 00:29:02.665 ************************************ 00:29:02.665 13:23:55 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:29:02.665 * Looking for test storage... 
00:29:02.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:29:02.665 13:23:55 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:02.665 13:23:55 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:29:02.665 13:23:55 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:02.665 13:23:55 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:02.665 13:23:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.665 13:23:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.665 13:23:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.665 13:23:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.666 13:23:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:29:02.937 13:23:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.937 13:23:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.937 13:23:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.937 13:23:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:29:02.937 13:23:55 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.937 13:23:55 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:02.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.937 --rc genhtml_branch_coverage=1 00:29:02.937 --rc genhtml_function_coverage=1 00:29:02.937 --rc genhtml_legend=1 00:29:02.937 --rc geninfo_all_blocks=1 00:29:02.937 --rc geninfo_unexecuted_blocks=1 00:29:02.937 00:29:02.937 ' 00:29:02.937 13:23:55 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:02.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.937 --rc genhtml_branch_coverage=1 00:29:02.937 --rc genhtml_function_coverage=1 00:29:02.937 --rc genhtml_legend=1 00:29:02.937 --rc geninfo_all_blocks=1 00:29:02.937 --rc geninfo_unexecuted_blocks=1 00:29:02.937 
00:29:02.937 ' 00:29:02.937 13:23:55 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:02.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.937 --rc genhtml_branch_coverage=1 00:29:02.937 --rc genhtml_function_coverage=1 00:29:02.937 --rc genhtml_legend=1 00:29:02.937 --rc geninfo_all_blocks=1 00:29:02.937 --rc geninfo_unexecuted_blocks=1 00:29:02.937 00:29:02.937 ' 00:29:02.937 13:23:55 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:02.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.937 --rc genhtml_branch_coverage=1 00:29:02.937 --rc genhtml_function_coverage=1 00:29:02.937 --rc genhtml_legend=1 00:29:02.937 --rc geninfo_all_blocks=1 00:29:02.937 --rc geninfo_unexecuted_blocks=1 00:29:02.937 00:29:02.937 ' 00:29:02.937 13:23:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:29:02.937 13:23:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61371 00:29:02.937 13:23:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61371 00:29:02.937 13:23:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:29:02.937 13:23:55 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61371 ']' 00:29:02.937 13:23:55 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.937 13:23:55 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.937 13:23:55 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.937 13:23:55 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.937 13:23:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:29:02.937 [2024-12-06 13:23:55.945231] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:29:02.937 [2024-12-06 13:23:55.946007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61371 ] 00:29:03.195 [2024-12-06 13:23:56.148628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.453 [2024-12-06 13:23:56.301155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.391 13:23:57 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.391 13:23:57 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:29:04.391 13:23:57 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:29:04.649 { 00:29:04.649 "version": "SPDK v25.01-pre git sha1 88d8055fc", 00:29:04.649 "fields": { 00:29:04.649 "major": 25, 00:29:04.649 "minor": 1, 00:29:04.649 "patch": 0, 00:29:04.649 "suffix": "-pre", 00:29:04.649 "commit": "88d8055fc" 00:29:04.649 } 00:29:04.649 } 00:29:04.649 13:23:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:29:04.649 13:23:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:29:04.649 13:23:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:29:04.649 13:23:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:29:04.649 13:23:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:29:04.649 13:23:57 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.649 13:23:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:29:04.649 13:23:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:29:04.649 13:23:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:29:04.649 13:23:57 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.908 13:23:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:29:04.908 13:23:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:29:04.908 13:23:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:29:04.908 13:23:57 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:29:04.908 13:23:57 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:29:04.908 13:23:57 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:04.908 13:23:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.908 13:23:57 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:04.908 13:23:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.908 13:23:57 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:04.908 13:23:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.908 13:23:57 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:04.908 13:23:57 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:04.908 13:23:57 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:29:05.167 request: 00:29:05.167 { 00:29:05.167 "method": "env_dpdk_get_mem_stats", 00:29:05.167 "req_id": 1 00:29:05.167 } 00:29:05.167 Got JSON-RPC error response 00:29:05.167 response: 00:29:05.167 { 00:29:05.167 "code": -32601, 00:29:05.167 "message": "Method not found" 00:29:05.167 } 00:29:05.167 13:23:58 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:29:05.167 13:23:58 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:05.167 13:23:58 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:05.167 13:23:58 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:05.167 13:23:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61371 00:29:05.167 13:23:58 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61371 ']' 00:29:05.167 13:23:58 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61371 00:29:05.167 13:23:58 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:29:05.167 13:23:58 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.167 13:23:58 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61371 00:29:05.167 13:23:58 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:05.167 killing process with pid 61371 00:29:05.168 13:23:58 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:05.168 13:23:58 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61371' 00:29:05.168 13:23:58 app_cmdline -- common/autotest_common.sh@973 -- # kill 61371 00:29:05.168 13:23:58 app_cmdline -- common/autotest_common.sh@978 -- # wait 61371 00:29:08.543 00:29:08.543 real 0m5.404s 00:29:08.543 user 0m5.549s 00:29:08.543 sys 0m0.954s 00:29:08.543 13:24:00 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:08.543 ************************************ 00:29:08.543 END TEST app_cmdline 00:29:08.543 ************************************ 00:29:08.543 13:24:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:29:08.543 13:24:00 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:29:08.543 13:24:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:08.543 13:24:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:08.543 13:24:00 -- common/autotest_common.sh@10 -- # set +x 00:29:08.543 ************************************ 00:29:08.543 START TEST version 00:29:08.543 ************************************ 00:29:08.543 13:24:01 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:29:08.543 * Looking for test storage... 
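The -32601 above is the expected outcome, not a failure: cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so every other method, env_dpdk_get_mem_stats included, must come back 'Method not found'. Probing the allowlist by hand would look like this sketch (socket and paths as in the log; target startup and teardown omitted):

scripts/rpc.py spdk_get_version         # allowed: returns the version object shown above
scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
scripts/rpc.py env_dpdk_get_mem_stats   # rejected: code -32601, 'Method not found'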
00:29:08.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:29:08.543 13:24:01 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:08.543 13:24:01 version -- common/autotest_common.sh@1711 -- # lcov --version 00:29:08.543 13:24:01 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:08.543 13:24:01 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:08.543 13:24:01 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.543 13:24:01 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.543 13:24:01 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.543 13:24:01 version -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.543 13:24:01 version -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.543 13:24:01 version -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.543 13:24:01 version -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.543 13:24:01 version -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.543 13:24:01 version -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.543 13:24:01 version -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.543 13:24:01 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.543 13:24:01 version -- scripts/common.sh@344 -- # case "$op" in 00:29:08.543 13:24:01 version -- scripts/common.sh@345 -- # : 1 00:29:08.543 13:24:01 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.543 13:24:01 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:08.543 13:24:01 version -- scripts/common.sh@365 -- # decimal 1 00:29:08.543 13:24:01 version -- scripts/common.sh@353 -- # local d=1 00:29:08.543 13:24:01 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.543 13:24:01 version -- scripts/common.sh@355 -- # echo 1 00:29:08.543 13:24:01 version -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.543 13:24:01 version -- scripts/common.sh@366 -- # decimal 2 00:29:08.543 13:24:01 version -- scripts/common.sh@353 -- # local d=2 00:29:08.543 13:24:01 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.543 13:24:01 version -- scripts/common.sh@355 -- # echo 2 00:29:08.543 13:24:01 version -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.543 13:24:01 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.543 13:24:01 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.543 13:24:01 version -- scripts/common.sh@368 -- # return 0 00:29:08.543 13:24:01 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.543 13:24:01 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:08.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.543 --rc genhtml_branch_coverage=1 00:29:08.543 --rc genhtml_function_coverage=1 00:29:08.543 --rc genhtml_legend=1 00:29:08.543 --rc geninfo_all_blocks=1 00:29:08.543 --rc geninfo_unexecuted_blocks=1 00:29:08.543 00:29:08.543 ' 00:29:08.543 13:24:01 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:08.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.543 --rc genhtml_branch_coverage=1 00:29:08.543 --rc genhtml_function_coverage=1 00:29:08.544 --rc genhtml_legend=1 00:29:08.544 --rc geninfo_all_blocks=1 00:29:08.544 --rc geninfo_unexecuted_blocks=1 00:29:08.544 00:29:08.544 ' 00:29:08.544 13:24:01 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:08.544 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:29:08.544 --rc genhtml_branch_coverage=1 00:29:08.544 --rc genhtml_function_coverage=1 00:29:08.544 --rc genhtml_legend=1 00:29:08.544 --rc geninfo_all_blocks=1 00:29:08.544 --rc geninfo_unexecuted_blocks=1 00:29:08.544 00:29:08.544 ' 00:29:08.544 13:24:01 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:08.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.544 --rc genhtml_branch_coverage=1 00:29:08.544 --rc genhtml_function_coverage=1 00:29:08.544 --rc genhtml_legend=1 00:29:08.544 --rc geninfo_all_blocks=1 00:29:08.544 --rc geninfo_unexecuted_blocks=1 00:29:08.544 00:29:08.544 ' 00:29:08.544 13:24:01 version -- app/version.sh@17 -- # get_header_version major 00:29:08.544 13:24:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:29:08.544 13:24:01 version -- app/version.sh@14 -- # cut -f2 00:29:08.544 13:24:01 version -- app/version.sh@14 -- # tr -d '"' 00:29:08.544 13:24:01 version -- app/version.sh@17 -- # major=25 00:29:08.544 13:24:01 version -- app/version.sh@18 -- # get_header_version minor 00:29:08.544 13:24:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:29:08.544 13:24:01 version -- app/version.sh@14 -- # cut -f2 00:29:08.544 13:24:01 version -- app/version.sh@14 -- # tr -d '"' 00:29:08.544 13:24:01 version -- app/version.sh@18 -- # minor=1 00:29:08.544 13:24:01 version -- app/version.sh@19 -- # get_header_version patch 00:29:08.544 13:24:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:29:08.544 13:24:01 version -- app/version.sh@14 -- # cut -f2 00:29:08.544 13:24:01 version -- app/version.sh@14 -- # tr -d '"' 00:29:08.544 13:24:01 version -- app/version.sh@19 -- # patch=0 00:29:08.544 13:24:01 version -- app/version.sh@20 -- # get_header_version suffix 00:29:08.544 13:24:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:29:08.544 13:24:01 version -- app/version.sh@14 -- # cut -f2 00:29:08.544 13:24:01 version -- app/version.sh@14 -- # tr -d '"' 00:29:08.544 13:24:01 version -- app/version.sh@20 -- # suffix=-pre 00:29:08.544 13:24:01 version -- app/version.sh@22 -- # version=25.1 00:29:08.544 13:24:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:29:08.544 13:24:01 version -- app/version.sh@28 -- # version=25.1rc0 00:29:08.544 13:24:01 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:08.544 13:24:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:29:08.544 13:24:01 version -- app/version.sh@30 -- # py_version=25.1rc0 00:29:08.544 13:24:01 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:29:08.544 00:29:08.544 real 0m0.293s 00:29:08.544 user 0m0.176s 00:29:08.544 sys 0m0.160s 00:29:08.544 13:24:01 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:08.544 13:24:01 version -- common/autotest_common.sh@10 -- # set +x 00:29:08.544 ************************************ 00:29:08.544 END TEST version 00:29:08.544 ************************************ 00:29:08.544 13:24:01 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:29:08.544 13:24:01 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:29:08.544 13:24:01 -- spdk/autotest.sh@194 -- # uname -s 00:29:08.544 13:24:01 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:29:08.544 13:24:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:29:08.544 13:24:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:29:08.544 13:24:01 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:29:08.544 13:24:01 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:08.544 13:24:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:08.544 13:24:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:08.544 13:24:01 -- common/autotest_common.sh@10 -- # set +x 00:29:08.544 ************************************ 00:29:08.544 START TEST blockdev_nvme 00:29:08.544 ************************************ 00:29:08.544 13:24:01 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:08.544 * Looking for test storage... 00:29:08.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:08.544 13:24:01 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:08.544 13:24:01 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:29:08.544 13:24:01 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:08.544 13:24:01 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.544 13:24:01 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:29:08.544 13:24:01 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.544 13:24:01 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:08.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.544 --rc genhtml_branch_coverage=1 00:29:08.544 --rc genhtml_function_coverage=1 00:29:08.544 --rc genhtml_legend=1 00:29:08.544 --rc geninfo_all_blocks=1 00:29:08.544 --rc geninfo_unexecuted_blocks=1 00:29:08.544 00:29:08.544 ' 00:29:08.544 13:24:01 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:08.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.544 --rc genhtml_branch_coverage=1 00:29:08.544 --rc genhtml_function_coverage=1 00:29:08.544 --rc genhtml_legend=1 00:29:08.544 --rc geninfo_all_blocks=1 00:29:08.544 --rc geninfo_unexecuted_blocks=1 00:29:08.544 00:29:08.544 ' 00:29:08.544 13:24:01 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:08.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.544 --rc genhtml_branch_coverage=1 00:29:08.544 --rc genhtml_function_coverage=1 00:29:08.544 --rc genhtml_legend=1 00:29:08.544 --rc geninfo_all_blocks=1 00:29:08.544 --rc geninfo_unexecuted_blocks=1 00:29:08.544 00:29:08.544 ' 00:29:08.544 13:24:01 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:08.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.544 --rc genhtml_branch_coverage=1 00:29:08.544 --rc genhtml_function_coverage=1 00:29:08.544 --rc genhtml_legend=1 00:29:08.544 --rc geninfo_all_blocks=1 00:29:08.544 --rc geninfo_unexecuted_blocks=1 00:29:08.544 00:29:08.544 ' 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:08.544 13:24:01 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:29:08.544 13:24:01 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:29:08.545 13:24:01 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:29:08.545 13:24:01 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:29:08.545 13:24:01 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61571 00:29:08.545 13:24:01 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:08.545 13:24:01 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61571 00:29:08.545 13:24:01 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61571 ']' 00:29:08.545 13:24:01 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.545 13:24:01 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.545 13:24:01 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:08.545 13:24:01 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.545 13:24:01 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.545 13:24:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:08.803 [2024-12-06 13:24:01.754890] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:29:08.803 [2024-12-06 13:24:01.755099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61571 ] 00:29:09.173 [2024-12-06 13:24:01.957315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.173 [2024-12-06 13:24:02.120887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:29:10.561 13:24:03 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:29:10.561 13:24:03 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:29:10.561 13:24:03 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:29:10.561 13:24:03 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:29:10.561 13:24:03 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:10.561 13:24:03 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.561 13:24:03 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.561 13:24:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:29:10.561 13:24:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.561 13:24:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.561 13:24:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:10.821 13:24:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.821 13:24:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:10.821 13:24:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.821 13:24:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:10.821 13:24:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.821 13:24:03 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:29:10.821 13:24:03 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:29:10.821 13:24:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.821 13:24:03 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:29:10.821 13:24:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:10.821 13:24:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.821 13:24:03 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:29:10.821 13:24:03 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:29:10.822 13:24:03 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "236d5396-5046-46d6-b4e2-750b1f8c9ba8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "236d5396-5046-46d6-b4e2-750b1f8c9ba8",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "60e72d00-229d-428a-bc41-99b01a867f72"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "60e72d00-229d-428a-bc41-99b01a867f72",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2b7873b7-d107-46bd-a7cb-8838ecd6781c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2b7873b7-d107-46bd-a7cb-8838ecd6781c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a614c3dd-82dc-4c1a-9fa0-639acebefa80"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a614c3dd-82dc-4c1a-9fa0-639acebefa80",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0093cca9-a42e-4a41-b5f5-27ed802737ce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "0093cca9-a42e-4a41-b5f5-27ed802737ce",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "453d2daa-fe9a-466d-a131-e1a62bee69b1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "453d2daa-fe9a-466d-a131-e1a62bee69b1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:10.822 13:24:03 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:29:10.822 13:24:03 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:29:10.822 13:24:03 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:29:10.822 13:24:03 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61571 00:29:10.822 13:24:03 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61571 ']' 00:29:10.822 13:24:03 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61571 00:29:10.822 13:24:03 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:29:10.822 13:24:03 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.822 13:24:03 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61571 00:29:11.082 13:24:03 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:11.082 13:24:03 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:11.082 killing process with pid 61571 00:29:11.082 13:24:03 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61571' 00:29:11.082 13:24:03 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61571 00:29:11.082 13:24:03 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61571 00:29:14.367 13:24:06 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:14.367 13:24:06 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:14.367 13:24:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:29:14.367 13:24:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.367 13:24:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:14.367 ************************************ 00:29:14.367 START TEST bdev_hello_world 00:29:14.367 ************************************ 00:29:14.367 13:24:06 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:14.367 [2024-12-06 13:24:06.939751] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:29:14.367 [2024-12-06 13:24:06.939969] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61677 ] 00:29:14.367 [2024-12-06 13:24:07.137160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.367 [2024-12-06 13:24:07.289742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.304 [2024-12-06 13:24:08.061639] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:15.304 [2024-12-06 13:24:08.061730] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:15.304 [2024-12-06 13:24:08.061785] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:15.304 [2024-12-06 13:24:08.065503] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:15.304 [2024-12-06 13:24:08.066198] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:15.304 [2024-12-06 13:24:08.066247] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:15.304 [2024-12-06 13:24:08.066583] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
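hello_bdev has just opened Nvme0n1, obtained an I/O channel, written a buffer, and read it back ("Read string from bdev : Hello World!"). Outside the harness the example can be rerun against the same generated config with nothing more than (paths relative to the SPDK repo root, matching the absolute paths in the trace):

  $ build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1   # write one block, read it back, compare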
00:29:15.304 00:29:15.304 [2024-12-06 13:24:08.066646] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:16.678 00:29:16.678 real 0m2.622s 00:29:16.678 user 0m2.126s 00:29:16.678 sys 0m0.385s 00:29:16.678 13:24:09 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.678 13:24:09 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:16.678 ************************************ 00:29:16.678 END TEST bdev_hello_world 00:29:16.678 ************************************ 00:29:16.678 13:24:09 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:29:16.678 13:24:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:16.678 13:24:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.678 13:24:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:16.678 ************************************ 00:29:16.678 START TEST bdev_bounds 00:29:16.678 ************************************ 00:29:16.678 13:24:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:29:16.678 13:24:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61729 00:29:16.678 13:24:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:16.678 13:24:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:16.678 Process bdevio pid: 61729 00:29:16.678 13:24:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61729' 00:29:16.678 13:24:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61729 00:29:16.678 13:24:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61729 ']' 00:29:16.678 13:24:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.678 13:24:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.678 13:24:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.678 13:24:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.678 13:24:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:16.678 [2024-12-06 13:24:09.618139] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
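bdevio is started here with -w, which makes it initialize the bdev layer and then block on its RPC socket instead of running tests immediately; -s 0 forwards the harness's PRE_RESERVED_MEM=0 setting. The CUnit suites below only begin once tests.py sends the perform_tests RPC. A minimal sketch of that two-step invocation, assuming the repo root as working directory:

  $ test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # block until the RPC trigger arrives
  $ test/bdev/bdevio/tests.py perform_tests                        # fire all suites, print the CUnit summary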
00:29:16.678 [2024-12-06 13:24:09.618370] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61729 ]
00:29:16.935 [2024-12-06 13:24:09.822946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:16.935 [2024-12-06 13:24:09.980925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:16.935 [2024-12-06 13:24:09.981111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:16.935 [2024-12-06 13:24:09.981168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:17.866 13:24:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:17.866 13:24:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:29:17.866 13:24:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:29:17.866 I/O targets:
00:29:17.866 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:29:17.866 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:29:17.866 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:29:17.866 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:29:17.866 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:29:17.866 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:29:17.866
00:29:17.866
00:29:17.866 CUnit - A unit testing framework for C - Version 2.1-3
00:29:17.866 http://cunit.sourceforge.net/
00:29:17.866
00:29:17.866
00:29:17.866 Suite: bdevio tests on: Nvme3n1
00:29:17.866 Test: blockdev write read block ...passed
00:29:17.866 Test: blockdev write zeroes read block ...passed
00:29:17.866 Test: blockdev write zeroes read no split ...passed
00:29:18.123 Test: blockdev write zeroes read split ...passed
00:29:18.123 Test: blockdev write zeroes read split partial ...passed
00:29:18.123 Test: blockdev reset ...[2024-12-06 13:24:11.014481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:29:18.123 [2024-12-06 13:24:11.019137] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
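The "blockdev reset" test above submits spdk_bdev_reset() against Nvme3n1; the NVMe bdev module services it by disconnecting and reattaching the controller at 0000:00:13.0, which is what the paired nvme_ctrlr_disconnect / bdev_nvme_reset_ctrlr_complete notices record. The same controller-level reset path can be poked from the CLI against a running target (a hedged sketch, using the controller name Nvme3 assigned at attach time; bdevio itself stays in-process and does not go through this RPC):

  $ scripts/rpc.py bdev_nvme_reset_controller Nvme3   # disconnect + reconnect, same notices as above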
00:29:18.123 passed 00:29:18.123 Test: blockdev write read 8 blocks ...passed 00:29:18.123 Test: blockdev write read size > 128k ...passed 00:29:18.123 Test: blockdev write read invalid size ...passed 00:29:18.123 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.123 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.123 Test: blockdev write read max offset ...passed 00:29:18.123 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.123 Test: blockdev writev readv 8 blocks ...passed 00:29:18.123 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.123 Test: blockdev writev readv block ...passed 00:29:18.123 Test: blockdev writev readv size > 128k ...passed 00:29:18.123 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.123 Test: blockdev comparev and writev ...[2024-12-06 13:24:11.028989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af20a000 len:0x1000 00:29:18.123 [2024-12-06 13:24:11.029176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:18.123 passed 00:29:18.123 Test: blockdev nvme passthru rw ...passed 00:29:18.123 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:24:11.030157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:18.123 [2024-12-06 13:24:11.030327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:18.123 passed 00:29:18.123 Test: blockdev nvme admin passthru ...passed 00:29:18.123 Test: blockdev copy ...passed 00:29:18.123 Suite: bdevio tests on: Nvme2n3 00:29:18.123 Test: blockdev write read block ...passed 00:29:18.123 Test: blockdev write zeroes read block ...passed 00:29:18.123 Test: blockdev write zeroes read no split ...passed 00:29:18.123 Test: blockdev write zeroes read split ...passed 00:29:18.123 Test: blockdev write zeroes read split partial ...passed 00:29:18.123 Test: blockdev reset ...[2024-12-06 13:24:11.118998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:18.123 [2024-12-06 13:24:11.123730] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
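The *NOTICE* completions in the comparev and passthru tests above are expected negative-path output, not failures: "COMPARE FAILURE (02/85)" is status type 2h (media and data-integrity errors) with status code 85h (Compare Failure), returned because the test compares deliberately mismatching data, and "INVALID OPCODE (00/01)" is generic status code 01h for a deliberately unsupported opcode. The tests assert on those statuses, hence the trailing "passed" markers. On a kernel-owned namespace the miscompare is easy to reproduce with nvme-cli (a sketch; /dev/nvme0n1 is hypothetical here, since the devices in this run are bound to vfio for SPDK and invisible to the kernel driver):

  $ dd if=/dev/urandom of=mismatch.bin bs=4096 count=1
  $ nvme compare /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=4096 --data=mismatch.bin
  # expect Compare Failure (sct 0x2, sc 0x85) unless LBA 0 happens to hold the same bytes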
00:29:18.123 passed 00:29:18.123 Test: blockdev write read 8 blocks ...passed 00:29:18.123 Test: blockdev write read size > 128k ...passed 00:29:18.123 Test: blockdev write read invalid size ...passed 00:29:18.123 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.123 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.123 Test: blockdev write read max offset ...passed 00:29:18.123 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.123 Test: blockdev writev readv 8 blocks ...passed 00:29:18.123 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.123 Test: blockdev writev readv block ...passed 00:29:18.123 Test: blockdev writev readv size > 128k ...passed 00:29:18.123 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.123 Test: blockdev comparev and writev ...[2024-12-06 13:24:11.132819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x291c06000 len:0x1000 00:29:18.123 [2024-12-06 13:24:11.132886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:18.123 passed 00:29:18.123 Test: blockdev nvme passthru rw ...passed 00:29:18.123 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:24:11.133739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:18.123 [2024-12-06 13:24:11.133782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:18.123 passed 00:29:18.123 Test: blockdev nvme admin passthru ...passed 00:29:18.123 Test: blockdev copy ...passed 00:29:18.123 Suite: bdevio tests on: Nvme2n2 00:29:18.123 Test: blockdev write read block ...passed 00:29:18.123 Test: blockdev write zeroes read block ...passed 00:29:18.123 Test: blockdev write zeroes read no split ...passed 00:29:18.123 Test: blockdev write zeroes read split ...passed 00:29:18.380 Test: blockdev write zeroes read split partial ...passed 00:29:18.381 Test: blockdev reset ...[2024-12-06 13:24:11.224784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:18.381 [2024-12-06 13:24:11.229473] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
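For ad-hoc I/O against these same bdevs outside bdevio, SPDK ships spdk_dd, a dd-alike that opens bdevs from the same JSON config. A sketch, assuming the repo root as working directory (the output file name is arbitrary):

  $ build/bin/spdk_dd --json test/bdev/bdev.json --ib=Nvme2n2 --ob=/tmp/nvme2n2.bin --bs=4096 --count=8   # copy 8 blocks out of the bdev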
00:29:18.381 passed 00:29:18.381 Test: blockdev write read 8 blocks ...passed 00:29:18.381 Test: blockdev write read size > 128k ...passed 00:29:18.381 Test: blockdev write read invalid size ...passed 00:29:18.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.381 Test: blockdev write read max offset ...passed 00:29:18.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.381 Test: blockdev writev readv 8 blocks ...passed 00:29:18.381 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.381 Test: blockdev writev readv block ...passed 00:29:18.381 Test: blockdev writev readv size > 128k ...passed 00:29:18.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.381 Test: blockdev comparev and writev ...passed 00:29:18.381 Test: blockdev nvme passthru rw ...[2024-12-06 13:24:11.238657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf23c000 len:0x1000 00:29:18.381 [2024-12-06 13:24:11.238725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:18.381 passed 00:29:18.381 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:24:11.239584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:18.381 [2024-12-06 13:24:11.239622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:18.381 passed 00:29:18.381 Test: blockdev nvme admin passthru ...passed 00:29:18.381 Test: blockdev copy ...passed 00:29:18.381 Suite: bdevio tests on: Nvme2n1 00:29:18.381 Test: blockdev write read block ...passed 00:29:18.381 Test: blockdev write zeroes read block ...passed 00:29:18.381 Test: blockdev write zeroes read no split ...passed 00:29:18.381 Test: blockdev write zeroes read split ...passed 00:29:18.381 Test: blockdev write zeroes read split partial ...passed 00:29:18.381 Test: blockdev reset ...[2024-12-06 13:24:11.326354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:18.381 passed 00:29:18.381 Test: blockdev write read 8 blocks ...[2024-12-06 13:24:11.331428] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:29:18.381 passed 00:29:18.381 Test: blockdev write read size > 128k ...passed 00:29:18.381 Test: blockdev write read invalid size ...passed 00:29:18.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.381 Test: blockdev write read max offset ...passed 00:29:18.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.381 Test: blockdev writev readv 8 blocks ...passed 00:29:18.381 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.381 Test: blockdev writev readv block ...passed 00:29:18.381 Test: blockdev writev readv size > 128k ...passed 00:29:18.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.381 Test: blockdev comparev and writev ...[2024-12-06 13:24:11.339906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf238000 len:0x1000 00:29:18.381 [2024-12-06 13:24:11.339981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:18.381 passed 00:29:18.381 Test: blockdev nvme passthru rw ...passed 00:29:18.381 Test: blockdev nvme passthru vendor specific ...passed 00:29:18.381 Test: blockdev nvme admin passthru ...[2024-12-06 13:24:11.340701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:18.381 [2024-12-06 13:24:11.340738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:18.381 passed 00:29:18.381 Test: blockdev copy ...passed 00:29:18.381 Suite: bdevio tests on: Nvme1n1 00:29:18.381 Test: blockdev write read block ...passed 00:29:18.381 Test: blockdev write zeroes read block ...passed 00:29:18.381 Test: blockdev write zeroes read no split ...passed 00:29:18.381 Test: blockdev write zeroes read split ...passed 00:29:18.381 Test: blockdev write zeroes read split partial ...passed 00:29:18.381 Test: blockdev reset ...[2024-12-06 13:24:11.476997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:29:18.648 passed 00:29:18.648 Test: blockdev write read 8 blocks ...[2024-12-06 13:24:11.481394] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
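One detail that explains the repeated resets at 0000:00:12.0: Nvme2n1, Nvme2n2, and Nvme2n3 are three namespaces of the single QEMU controller with serial 12342 (see the bdev dump earlier in this run), so each of those three suites resets the same controller. The bdev-to-controller mapping can be read back from the target (a sketch assuming a running target and jq installed):

  $ scripts/rpc.py bdev_get_bdevs | jq -r '.[] | "\(.name)\t\(.driver_specific.nvme[0].trid.traddr)"'   # e.g. Nvme2n2 -> 0000:00:12.0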
00:29:18.648 passed 00:29:18.648 Test: blockdev write read size > 128k ...passed 00:29:18.648 Test: blockdev write read invalid size ...passed 00:29:18.648 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.648 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.648 Test: blockdev write read max offset ...passed 00:29:18.648 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.648 Test: blockdev writev readv 8 blocks ...passed 00:29:18.648 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.648 Test: blockdev writev readv block ...passed 00:29:18.648 Test: blockdev writev readv size > 128k ...passed 00:29:18.648 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.648 Test: blockdev comparev and writev ...[2024-12-06 13:24:11.490344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf234000 len:0x1000 00:29:18.648 [2024-12-06 13:24:11.490435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:18.648 passed 00:29:18.648 Test: blockdev nvme passthru rw ...passed 00:29:18.648 Test: blockdev nvme passthru vendor specific ...passed 00:29:18.648 Test: blockdev nvme admin passthru ...[2024-12-06 13:24:11.491390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:18.648 [2024-12-06 13:24:11.491441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:18.648 passed 00:29:18.648 Test: blockdev copy ...passed 00:29:18.648 Suite: bdevio tests on: Nvme0n1 00:29:18.648 Test: blockdev write read block ...passed 00:29:18.648 Test: blockdev write zeroes read block ...passed 00:29:18.648 Test: blockdev write zeroes read no split ...passed 00:29:18.648 Test: blockdev write zeroes read split ...passed 00:29:18.648 Test: blockdev write zeroes read split partial ...passed 00:29:18.648 Test: blockdev reset ...[2024-12-06 13:24:11.578599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:29:18.648 [2024-12-06 13:24:11.583328] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:29:18.648 passed 00:29:18.648 Test: blockdev write read 8 blocks ...passed 00:29:18.648 Test: blockdev write read size > 128k ...passed 00:29:18.648 Test: blockdev write read invalid size ...passed 00:29:18.648 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:18.648 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:18.648 Test: blockdev write read max offset ...passed 00:29:18.648 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:18.648 Test: blockdev writev readv 8 blocks ...passed 00:29:18.648 Test: blockdev writev readv 30 x 1block ...passed 00:29:18.648 Test: blockdev writev readv block ...passed 00:29:18.648 Test: blockdev writev readv size > 128k ...passed 00:29:18.648 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:18.648 Test: blockdev comparev and writev ...passed 00:29:18.648 Test: blockdev nvme passthru rw ...[2024-12-06 13:24:11.591316] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:29:18.648 separate metadata which is not supported yet. 
00:29:18.648 passed 00:29:18.648 Test: blockdev nvme passthru vendor specific ...passed 00:29:18.648 Test: blockdev nvme admin passthru ...[2024-12-06 13:24:11.591900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:29:18.648 [2024-12-06 13:24:11.591965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:29:18.648 passed 00:29:18.648 Test: blockdev copy ...passed 00:29:18.648 00:29:18.648 Run Summary: Type Total Ran Passed Failed Inactive 00:29:18.648 suites 6 6 n/a 0 0 00:29:18.648 tests 138 138 138 0 0 00:29:18.648 asserts 893 893 893 0 n/a 00:29:18.648 00:29:18.648 Elapsed time = 1.761 seconds 00:29:18.648 0 00:29:18.648 13:24:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61729 00:29:18.648 13:24:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61729 ']' 00:29:18.648 13:24:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61729 00:29:18.648 13:24:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:29:18.648 13:24:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.648 13:24:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61729 00:29:18.648 13:24:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:18.648 13:24:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:18.648 13:24:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61729' 00:29:18.648 killing process with pid 61729 00:29:18.648 13:24:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61729 00:29:18.648 13:24:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61729 00:29:20.021 13:24:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:29:20.021 00:29:20.021 real 0m3.461s 00:29:20.021 user 0m8.695s 00:29:20.021 sys 0m0.612s 00:29:20.021 13:24:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.021 13:24:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:20.021 ************************************ 00:29:20.021 END TEST bdev_bounds 00:29:20.021 ************************************ 00:29:20.021 13:24:12 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:20.021 13:24:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:20.021 13:24:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.021 13:24:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:20.021 ************************************ 00:29:20.021 START TEST bdev_nbd 00:29:20.021 ************************************ 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61795 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:20.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61795 /var/tmp/spdk-nbd.sock 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61795 ']' 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.021 13:24:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:20.279 [2024-12-06 13:24:13.128888] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
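The nbd test that begins here runs bdev_svc on a dedicated RPC socket and then exports each bdev as a kernel /dev/nbdX node, probing readiness with a single O_DIRECT read, exactly the dd calls traced below. A bare-bones version of one export/teardown cycle (assuming the repo root as working directory and that the nbd module is available, as the [[ -e /sys/module/nbd ]] check above requires):

  $ sudo modprobe nbd                                                         # if not already loaded
  $ test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
  $ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  $ dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct                 # readiness probe: succeeds once the export is live
  $ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0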
00:29:20.279 [2024-12-06 13:24:13.129062] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.279 [2024-12-06 13:24:13.323018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.537 [2024-12-06 13:24:13.516207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:21.542 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:21.801 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:21.801 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:21.801 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:21.801 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:21.801 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:21.801 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:21.801 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:21.801 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:21.801 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:21.801 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:21.801 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:21.801 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:21.802 1+0 records in 
00:29:21.802 1+0 records out 00:29:21.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616374 s, 6.6 MB/s 00:29:21.802 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.802 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:21.802 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.802 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:21.802 13:24:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:21.802 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:21.802 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:21.802 13:24:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:22.060 1+0 records in 00:29:22.060 1+0 records out 00:29:22.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718807 s, 5.7 MB/s 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:22.060 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:29:22.316 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:29:22.316 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:22.574 1+0 records in 00:29:22.574 1+0 records out 00:29:22.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588829 s, 7.0 MB/s 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:22.574 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:22.831 1+0 records in 00:29:22.831 1+0 records out 00:29:22.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706768 s, 5.8 MB/s 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.831 13:24:15 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:22.831 13:24:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:23.096 1+0 records in 00:29:23.096 1+0 records out 00:29:23.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492046 s, 8.3 MB/s 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:23.096 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:23.355 1+0 records in 00:29:23.355 1+0 records out 00:29:23.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601327 s, 6.8 MB/s 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:23.355 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:23.613 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:23.613 { 00:29:23.613 "nbd_device": "/dev/nbd0", 00:29:23.613 "bdev_name": "Nvme0n1" 00:29:23.613 }, 00:29:23.613 { 00:29:23.613 "nbd_device": "/dev/nbd1", 00:29:23.613 "bdev_name": "Nvme1n1" 00:29:23.613 }, 00:29:23.613 { 00:29:23.613 "nbd_device": "/dev/nbd2", 00:29:23.613 "bdev_name": "Nvme2n1" 00:29:23.613 }, 00:29:23.613 { 00:29:23.613 "nbd_device": "/dev/nbd3", 00:29:23.613 "bdev_name": "Nvme2n2" 00:29:23.613 }, 00:29:23.613 { 00:29:23.613 "nbd_device": "/dev/nbd4", 00:29:23.613 "bdev_name": "Nvme2n3" 00:29:23.613 }, 00:29:23.613 { 00:29:23.613 "nbd_device": "/dev/nbd5", 00:29:23.613 "bdev_name": "Nvme3n1" 00:29:23.613 } 00:29:23.613 ]' 00:29:23.613 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:23.613 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:23.613 { 00:29:23.613 "nbd_device": "/dev/nbd0", 00:29:23.613 "bdev_name": "Nvme0n1" 00:29:23.613 }, 00:29:23.613 { 00:29:23.613 "nbd_device": "/dev/nbd1", 00:29:23.613 "bdev_name": "Nvme1n1" 00:29:23.613 }, 00:29:23.613 { 00:29:23.613 "nbd_device": "/dev/nbd2", 00:29:23.613 "bdev_name": "Nvme2n1" 00:29:23.613 }, 00:29:23.613 { 00:29:23.613 "nbd_device": "/dev/nbd3", 00:29:23.613 "bdev_name": "Nvme2n2" 00:29:23.613 }, 00:29:23.613 { 00:29:23.613 "nbd_device": "/dev/nbd4", 00:29:23.613 "bdev_name": "Nvme2n3" 00:29:23.613 }, 00:29:23.613 { 00:29:23.613 "nbd_device": "/dev/nbd5", 00:29:23.613 "bdev_name": "Nvme3n1" 00:29:23.613 } 00:29:23.613 ]' 00:29:23.613 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:23.872 13:24:16 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:29:23.872 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:23.872 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:29:23.872 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:23.872 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:23.872 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:23.872 13:24:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:24.130 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:24.130 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:24.130 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:24.130 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:24.130 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:24.130 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:24.130 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:24.130 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:24.130 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:24.130 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:24.388 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:24.388 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:24.388 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:24.388 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:24.388 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:24.388 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:24.388 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:24.388 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:24.388 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:24.389 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:29:24.648 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:29:24.648 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:29:24.648 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:29:24.648 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:24.648 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:24.648 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:29:24.648 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:24.648 13:24:17 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:29:24.648 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:24.648 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:29:24.906 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:29:24.906 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:29:24.906 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:29:24.906 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:24.906 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:24.906 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:29:24.906 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:24.906 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:24.906 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:24.906 13:24:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:29:25.163 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:29:25.163 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:29:25.163 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:29:25.163 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:25.163 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:25.163 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:29:25.163 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:25.163 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:25.163 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:25.163 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:29:25.425 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:29:25.425 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:29:25.425 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:29:25.425 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:25.425 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:25.425 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:29:25.425 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:25.425 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:25.425 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:25.425 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:25.425 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:25.993 13:24:18 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:25.993 13:24:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:26.252 /dev/nbd0 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:26.252 
13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:26.252 1+0 records in 00:29:26.252 1+0 records out 00:29:26.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479046 s, 8.6 MB/s 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:26.252 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:29:26.512 /dev/nbd1 00:29:26.512 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:26.512 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:26.512 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:29:26.512 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:26.512 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:26.512 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:26.512 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:29:26.770 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:26.770 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:26.770 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:26.770 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:26.770 1+0 records in 00:29:26.770 1+0 records out 00:29:26.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606103 s, 6.8 MB/s 00:29:26.770 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:26.770 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:26.770 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:26.770 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:26.770 13:24:19 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:29:26.770 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:26.770 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:26.770 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:29:26.770 /dev/nbd10 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:27.031 1+0 records in 00:29:27.031 1+0 records out 00:29:27.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656583 s, 6.2 MB/s 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:27.031 13:24:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:29:27.291 /dev/nbd11 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:27.291 13:24:20 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:27.291 1+0 records in 00:29:27.291 1+0 records out 00:29:27.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729127 s, 5.6 MB/s 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:27.291 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:29:27.551 /dev/nbd12 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:27.551 1+0 records in 00:29:27.551 1+0 records out 00:29:27.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760549 s, 5.4 MB/s 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:27.551 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:29:27.811 /dev/nbd13 
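All six exports above go through the same start-and-wait dance: nbd_start_disk is issued over the RPC socket, then a waitfornbd loop polls until the device is usable. Reconstructed from the xtrace, the helper works in two stages. This is a hedged sketch — the retry delay and the scratch-file path are assumptions; only the loop bounds, grep, dd, stat, and size check are visible in the trace:

    # Sketch of the waitfornbd pattern traced above (common/autotest_common.sh).
    waitfornbd() {
        local nbd_name=$1 i size
        # Stage 1: wait (up to 20 tries) for the kernel to list the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; not visible in the trace
        done
        # Stage 2: prove the export services I/O with one 4 KiB O_DIRECT read.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1
        done
        return 1
    }

The O_DIRECT read matters: it bypasses the page cache, so a passing dd means the nbd kernel module really round-tripped a block through the SPDK target rather than answering from cache.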
00:29:28.070 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:29:28.070 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:29:28.070 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:29:28.070 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:29:28.070 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:28.071 1+0 records in 00:29:28.071 1+0 records out 00:29:28.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000829982 s, 4.9 MB/s 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:28.071 13:24:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:28.330 { 00:29:28.330 "nbd_device": "/dev/nbd0", 00:29:28.330 "bdev_name": "Nvme0n1" 00:29:28.330 }, 00:29:28.330 { 00:29:28.330 "nbd_device": "/dev/nbd1", 00:29:28.330 "bdev_name": "Nvme1n1" 00:29:28.330 }, 00:29:28.330 { 00:29:28.330 "nbd_device": "/dev/nbd10", 00:29:28.330 "bdev_name": "Nvme2n1" 00:29:28.330 }, 00:29:28.330 { 00:29:28.330 "nbd_device": "/dev/nbd11", 00:29:28.330 "bdev_name": "Nvme2n2" 00:29:28.330 }, 00:29:28.330 { 00:29:28.330 "nbd_device": "/dev/nbd12", 00:29:28.330 "bdev_name": "Nvme2n3" 00:29:28.330 }, 00:29:28.330 { 00:29:28.330 "nbd_device": "/dev/nbd13", 00:29:28.330 "bdev_name": "Nvme3n1" 00:29:28.330 } 00:29:28.330 ]' 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:28.330 { 00:29:28.330 "nbd_device": "/dev/nbd0", 00:29:28.330 "bdev_name": "Nvme0n1" 00:29:28.330 }, 00:29:28.330 { 00:29:28.330 "nbd_device": "/dev/nbd1", 00:29:28.330 "bdev_name": "Nvme1n1" 00:29:28.330 }, 00:29:28.330 { 00:29:28.330 "nbd_device": "/dev/nbd10", 00:29:28.330 "bdev_name": "Nvme2n1" 
00:29:28.330 }, 00:29:28.330 { 00:29:28.330 "nbd_device": "/dev/nbd11", 00:29:28.330 "bdev_name": "Nvme2n2" 00:29:28.330 }, 00:29:28.330 { 00:29:28.330 "nbd_device": "/dev/nbd12", 00:29:28.330 "bdev_name": "Nvme2n3" 00:29:28.330 }, 00:29:28.330 { 00:29:28.330 "nbd_device": "/dev/nbd13", 00:29:28.330 "bdev_name": "Nvme3n1" 00:29:28.330 } 00:29:28.330 ]' 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:28.330 /dev/nbd1 00:29:28.330 /dev/nbd10 00:29:28.330 /dev/nbd11 00:29:28.330 /dev/nbd12 00:29:28.330 /dev/nbd13' 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:28.330 /dev/nbd1 00:29:28.330 /dev/nbd10 00:29:28.330 /dev/nbd11 00:29:28.330 /dev/nbd12 00:29:28.330 /dev/nbd13' 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:28.330 256+0 records in 00:29:28.330 256+0 records out 00:29:28.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0073122 s, 143 MB/s 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:28.330 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:28.627 256+0 records in 00:29:28.627 256+0 records out 00:29:28.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132575 s, 7.9 MB/s 00:29:28.627 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:28.627 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:28.627 256+0 records in 00:29:28.627 256+0 records out 00:29:28.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139869 s, 7.5 MB/s 00:29:28.627 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:28.627 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:29:28.887 256+0 records in 00:29:28.887 256+0 records out 00:29:28.887 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13351 s, 7.9 MB/s 00:29:28.887 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:28.887 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:29:28.887 256+0 records in 00:29:28.887 256+0 records out 00:29:28.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133911 s, 7.8 MB/s 00:29:28.887 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:28.887 13:24:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:29:29.147 256+0 records in 00:29:29.147 256+0 records out 00:29:29.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128977 s, 8.1 MB/s 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:29:29.147 256+0 records in 00:29:29.147 256+0 records out 00:29:29.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133133 s, 7.9 MB/s 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:29.147 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:29.716 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:29.716 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:29.716 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:29.716 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:29.716 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:29.716 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:29.716 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:29.716 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:29.716 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:29.716 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:29.975 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:29.975 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:29.975 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:29.975 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:29.975 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:29.975 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:29.976 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:29.976 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:29.976 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:29.976 13:24:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:29:30.235 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:29:30.235 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:29:30.235 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:29:30.235 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:30.235 13:24:23 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:30.235 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:29:30.235 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:30.235 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:30.235 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:30.235 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:30.493 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:29:30.752 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:29:30.752 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:29:30.752 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:29:30.752 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:30.752 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:30.752 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:29:30.752 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:30.752 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:30.752 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:30.752 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:29:30.752 13:24:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:31.010 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:29:31.268 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:31.526 malloc_lvol_verify 00:29:31.526 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:31.526 0882545b-d971-48e1-a5d2-d4c546d57451 00:29:31.785 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:31.785 d8d045f0-62ca-4164-9b05-0b89617f9682 00:29:31.785 13:24:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:32.043 /dev/nbd0 00:29:32.043 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:29:32.043 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:29:32.043 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:29:32.043 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:29:32.043 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:29:32.043 mke2fs 1.47.0 (5-Feb-2023) 00:29:32.043 Discarding device blocks: 0/4096 done 00:29:32.043 Creating filesystem with 4096 1k blocks and 1024 inodes 00:29:32.043 00:29:32.043 Allocating group tables: 0/1 done 00:29:32.043 Writing inode tables: 0/1 done 00:29:32.043 Creating journal (1024 blocks): done 00:29:32.043 Writing superblocks and filesystem accounting information: 0/1 done 00:29:32.043 00:29:32.043 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:32.043 13:24:25 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:32.043 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:32.043 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:32.043 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:32.043 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:32.043 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61795 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61795 ']' 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61795 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61795 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61795' 00:29:32.301 killing process with pid 61795 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61795 00:29:32.301 13:24:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61795 00:29:34.204 13:24:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:29:34.204 00:29:34.204 real 0m13.783s 00:29:34.204 user 0m18.198s 00:29:34.204 sys 0m5.626s 00:29:34.204 13:24:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:34.204 13:24:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:34.204 ************************************ 00:29:34.204 END TEST bdev_nbd 00:29:34.204 ************************************ 00:29:34.204 13:24:26 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:29:34.204 13:24:26 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:29:34.204 skipping fio tests on NVMe due to multi-ns failures. 00:29:34.204 13:24:26 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
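Before the bdev_nbd suite closed above, it ran one last scenario: build a logical volume on a malloc bdev, export it as /dev/nbd0, and treat a successful mkfs.ext4 as proof that a filesystem can live on the export. Stripped of the xtrace noise, the RPC sequence is short; a condensed sketch using the same socket, names, and sizes as the log (not the full nbd_with_lvol_verify helper, which also waits for the nbd capacity to show up under /sys/block/nbd0/size):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol, prints its UUID
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol
    mkfs.ext4 /dev/nbd0                                    # nonzero exit fails the test
    $rpc nbd_stop_disk /dev/nbd0

mkfs.ext4 is a convenient end-to-end probe here because it exercises discard, journaled writes, and superblock readback in one shot, matching the "Discarding device blocks" and "Creating journal" output in the log.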
00:29:34.204 13:24:26 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:34.204 13:24:26 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:34.204 13:24:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:29:34.204 13:24:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:34.204 13:24:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:34.204 ************************************ 00:29:34.204 START TEST bdev_verify 00:29:34.204 ************************************ 00:29:34.204 13:24:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:34.204 [2024-12-06 13:24:27.004306] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:29:34.204 [2024-12-06 13:24:27.004485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62211 ] 00:29:34.204 [2024-12-06 13:24:27.185877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:34.461 [2024-12-06 13:24:27.343541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.461 [2024-12-06 13:24:27.343551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.091 Running I/O for 5 seconds... 00:29:37.404 17856.00 IOPS, 69.75 MiB/s [2024-12-06T13:24:31.441Z] 18752.00 IOPS, 73.25 MiB/s [2024-12-06T13:24:32.819Z] 18645.33 IOPS, 72.83 MiB/s [2024-12-06T13:24:33.387Z] 17680.00 IOPS, 69.06 MiB/s [2024-12-06T13:24:33.387Z] 17241.60 IOPS, 67.35 MiB/s 00:29:40.287 Latency(us) 00:29:40.287 [2024-12-06T13:24:33.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.287 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:40.287 Verification LBA range: start 0x0 length 0xbd0bd 00:29:40.287 Nvme0n1 : 5.07 1313.78 5.13 0.00 0.00 96931.78 19473.55 137812.85 00:29:40.287 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:40.287 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:29:40.287 Nvme0n1 : 5.09 1509.13 5.90 0.00 0.00 84601.10 18225.25 80890.15 00:29:40.287 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:40.287 Verification LBA range: start 0x0 length 0xa0000 00:29:40.287 Nvme1n1 : 5.07 1313.13 5.13 0.00 0.00 96783.47 22594.32 128825.05 00:29:40.287 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:40.287 Verification LBA range: start 0xa0000 length 0xa0000 00:29:40.287 Nvme1n1 : 5.09 1508.52 5.89 0.00 0.00 84466.31 18724.57 77894.22 00:29:40.287 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:40.287 Verification LBA range: start 0x0 length 0x80000 00:29:40.287 Nvme2n1 : 5.10 1318.32 5.15 0.00 0.00 96258.80 12295.80 119837.26 00:29:40.287 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:40.287 Verification LBA range: start 0x80000 length 0x80000 00:29:40.287 Nvme2n1 : 5.09 1507.95 5.89 0.00 0.00 84295.56 18350.08 76396.25 00:29:40.287 Job: 
Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:40.287 Verification LBA range: start 0x0 length 0x80000 00:29:40.287 Nvme2n2 : 5.10 1316.47 5.14 0.00 0.00 96149.59 16976.94 124331.15 00:29:40.287 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:40.287 Verification LBA range: start 0x80000 length 0x80000 00:29:40.287 Nvme2n2 : 5.09 1507.41 5.89 0.00 0.00 84183.54 18474.91 74898.29 00:29:40.287 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:40.287 Verification LBA range: start 0x0 length 0x80000 00:29:40.287 Nvme2n3 : 5.11 1315.54 5.14 0.00 0.00 96019.51 17975.59 132819.63 00:29:40.287 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:40.287 Verification LBA range: start 0x80000 length 0x80000 00:29:40.287 Nvme2n3 : 5.10 1506.68 5.89 0.00 0.00 84057.43 17601.10 77894.22 00:29:40.287 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:40.287 Verification LBA range: start 0x0 length 0x20000 00:29:40.287 Nvme3n1 : 5.11 1314.89 5.14 0.00 0.00 95897.62 12108.56 138811.49 00:29:40.287 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:40.287 Verification LBA range: start 0x20000 length 0x20000 00:29:40.287 Nvme3n1 : 5.10 1504.77 5.88 0.00 0.00 83961.33 13856.18 82388.11 00:29:40.287 [2024-12-06T13:24:33.387Z] =================================================================================================================== 00:29:40.287 [2024-12-06T13:24:33.387Z] Total : 16936.59 66.16 0.00 0.00 89887.54 12108.56 138811.49 00:29:42.192 00:29:42.192 real 0m8.241s 00:29:42.192 user 0m15.053s 00:29:42.192 sys 0m0.432s 00:29:42.192 13:24:35 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.192 13:24:35 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:29:42.192 ************************************ 00:29:42.192 END TEST bdev_verify 00:29:42.192 ************************************ 00:29:42.192 13:24:35 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:42.192 13:24:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:29:42.192 13:24:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.192 13:24:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:42.192 ************************************ 00:29:42.192 START TEST bdev_verify_big_io 00:29:42.192 ************************************ 00:29:42.192 13:24:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:42.449 [2024-12-06 13:24:35.290884] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:29:42.449 [2024-12-06 13:24:35.291086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62320 ] 00:29:42.449 [2024-12-06 13:24:35.492368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:42.706 [2024-12-06 13:24:35.648569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.706 [2024-12-06 13:24:35.648611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.637 Running I/O for 5 seconds... 00:29:48.703 1724.00 IOPS, 107.75 MiB/s [2024-12-06T13:24:42.741Z] 3086.50 IOPS, 192.91 MiB/s [2024-12-06T13:24:42.741Z] 3612.00 IOPS, 225.75 MiB/s 00:29:49.641 Latency(us) 00:29:49.641 [2024-12-06T13:24:42.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.641 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:49.641 Verification LBA range: start 0x0 length 0xbd0b 00:29:49.641 Nvme0n1 : 5.67 156.16 9.76 0.00 0.00 799743.07 27088.21 842855.38 00:29:49.641 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:49.641 Verification LBA range: start 0xbd0b length 0xbd0b 00:29:49.641 Nvme0n1 : 5.53 141.87 8.87 0.00 0.00 851968.00 21970.16 834866.22 00:29:49.641 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:49.641 Verification LBA range: start 0x0 length 0xa000 00:29:49.641 Nvme1n1 : 5.68 154.69 9.67 0.00 0.00 784921.24 73400.32 778942.17 00:29:49.641 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:49.641 Verification LBA range: start 0xa000 length 0xa000 00:29:49.641 Nvme1n1 : 5.68 154.47 9.65 0.00 0.00 789399.21 46686.60 850844.53 00:29:49.641 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:49.641 Verification LBA range: start 0x0 length 0x8000 00:29:49.641 Nvme2n1 : 5.68 154.46 9.65 0.00 0.00 766082.50 71403.03 790925.90 00:29:49.641 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:49.641 Verification LBA range: start 0x8000 length 0x8000 00:29:49.641 Nvme2n1 : 5.69 154.25 9.64 0.00 0.00 769915.80 46936.26 866822.83 00:29:49.641 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:49.641 Verification LBA range: start 0x0 length 0x8000 00:29:49.641 Nvme2n2 : 5.68 157.69 9.86 0.00 0.00 736670.75 60917.27 810898.77 00:29:49.641 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:49.641 Verification LBA range: start 0x8000 length 0x8000 00:29:49.641 Nvme2n2 : 5.69 157.52 9.85 0.00 0.00 738144.83 20846.69 878806.55 00:29:49.641 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:49.641 Verification LBA range: start 0x0 length 0x8000 00:29:49.641 Nvme2n3 : 5.74 159.47 9.97 0.00 0.00 707454.10 58670.32 826877.07 00:29:49.641 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:49.641 Verification LBA range: start 0x8000 length 0x8000 00:29:49.641 Nvme2n3 : 5.74 160.14 10.01 0.00 0.00 705007.62 47685.24 890790.28 00:29:49.641 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:49.641 Verification LBA range: start 0x0 length 0x2000 00:29:49.641 Nvme3n1 : 5.78 173.75 10.86 0.00 0.00 638191.33 14355.50 846849.95 00:29:49.641 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:29:49.641 Verification LBA range: start 0x2000 length 0x2000 00:29:49.641 Nvme3n1 : 5.80 176.60 11.04 0.00 0.00 625950.38 11359.57 1030600.41 00:29:49.641 [2024-12-06T13:24:42.741Z] =================================================================================================================== 00:29:49.641 [2024-12-06T13:24:42.741Z] Total : 1901.09 118.82 0.00 0.00 738812.58 11359.57 1030600.41 00:29:51.545 00:29:51.545 real 0m9.324s 00:29:51.545 user 0m17.149s 00:29:51.545 sys 0m0.491s 00:29:51.545 13:24:44 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.545 ************************************ 00:29:51.545 13:24:44 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:29:51.545 END TEST bdev_verify_big_io 00:29:51.545 ************************************ 00:29:51.545 13:24:44 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:51.545 13:24:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:51.545 13:24:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.545 13:24:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:51.545 ************************************ 00:29:51.545 START TEST bdev_write_zeroes 00:29:51.545 ************************************ 00:29:51.545 13:24:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:51.801 [2024-12-06 13:24:44.685608] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:29:51.801 [2024-12-06 13:24:44.685810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62435 ] 00:29:51.801 [2024-12-06 13:24:44.898196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.058 [2024-12-06 13:24:45.107011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.988 Running I/O for 1 seconds... 
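At this point the suite has run bdevperf in verify mode twice (4 KiB and 64 KiB I/O) and has just launched it a third time with -w write_zeroes; only the I/O size, workload, run time, and core settings change between passes. A sketch of the first invocation with each flag annotated — written as a bash array so the annotations can sit inline; the -C reading is inferred from the two per-core job rows each bdev gets in the tables above:

    # Flags for the 4 KiB verify pass, as traced in this log.
    args=(
        --json test/bdev/bdev.json   # bdev config: the six Nvme* bdevs under test
        -q 128                       # queue depth per job
        -o 4096                      # I/O size in bytes (65536 in the big-I/O pass)
        -w verify                    # workload: write, read back, compare
        -t 5                         # run time in seconds (1 for write_zeroes)
        -C                           # every core in the mask drives every bdev
        -m 0x3                       # core mask: cores 0 and 1
    )
    ./build/examples/bdevperf "${args[@]}"

That per-core fan-out is why the verify tables report two jobs per bdev (Core Mask 0x1 and 0x2), while the single-core write_zeroes run below reports one.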
00:29:53.923 56832.00 IOPS, 222.00 MiB/s 00:29:53.923 Latency(us) 00:29:53.923 [2024-12-06T13:24:47.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.923 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:53.923 Nvme0n1 : 1.03 9386.65 36.67 0.00 0.00 13602.19 10360.93 29459.99 00:29:53.923 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:53.923 Nvme1n1 : 1.03 9372.45 36.61 0.00 0.00 13601.91 10548.18 29584.82 00:29:53.923 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:53.923 Nvme2n1 : 1.03 9358.46 36.56 0.00 0.00 13523.36 9986.44 26089.57 00:29:53.923 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:53.923 Nvme2n2 : 1.03 9344.49 36.50 0.00 0.00 13492.32 10236.10 25465.42 00:29:53.923 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:53.923 Nvme2n3 : 1.04 9330.40 36.45 0.00 0.00 13469.12 10236.10 25340.59 00:29:53.923 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:53.923 Nvme3n1 : 1.04 9316.45 36.39 0.00 0.00 13433.74 8238.81 27587.54 00:29:53.923 [2024-12-06T13:24:47.023Z] =================================================================================================================== 00:29:53.923 [2024-12-06T13:24:47.023Z] Total : 56108.90 219.18 0.00 0.00 13520.44 8238.81 29584.82 00:29:55.821 00:29:55.821 real 0m3.969s 00:29:55.821 user 0m3.427s 00:29:55.821 sys 0m0.418s 00:29:55.821 13:24:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:55.821 13:24:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:29:55.821 ************************************ 00:29:55.821 END TEST bdev_write_zeroes 00:29:55.821 ************************************ 00:29:55.821 13:24:48 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:55.821 13:24:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:55.821 13:24:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:55.821 13:24:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:55.821 ************************************ 00:29:55.821 START TEST bdev_json_nonenclosed 00:29:55.821 ************************************ 00:29:55.821 13:24:48 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:55.821 [2024-12-06 13:24:48.729547] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:29:55.821 [2024-12-06 13:24:48.729749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62499 ] 00:29:56.079 [2024-12-06 13:24:48.932153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.079 [2024-12-06 13:24:49.083122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.079 [2024-12-06 13:24:49.083228] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:29:56.079 [2024-12-06 13:24:49.083254] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:56.079 [2024-12-06 13:24:49.083268] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:56.339 00:29:56.339 real 0m0.786s 00:29:56.339 user 0m0.484s 00:29:56.339 sys 0m0.195s 00:29:56.339 13:24:49 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:56.339 13:24:49 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:56.339 ************************************ 00:29:56.339 END TEST bdev_json_nonenclosed 00:29:56.339 ************************************ 00:29:56.339 13:24:49 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:56.339 13:24:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:56.339 13:24:49 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:56.339 13:24:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:56.598 ************************************ 00:29:56.598 START TEST bdev_json_nonarray 00:29:56.598 ************************************ 00:29:56.598 13:24:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:56.598 [2024-12-06 13:24:49.535806] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:29:56.598 [2024-12-06 13:24:49.535964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62530 ] 00:29:56.857 [2024-12-06 13:24:49.715994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.857 [2024-12-06 13:24:49.867803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.857 [2024-12-06 13:24:49.867937] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
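This rejection, like the not-enclosed-in-{} one above it, is the point of these two tests: nonenclosed.json drops the outer braces and nonarray.json makes "subsystems" something other than an array, and json_config_prepare_ctx refuses both before any bdev is touched. For contrast, a minimal well-formed config has this shape (a sketch; the filename is illustrative):

# Top level must be a JSON object whose "subsystems" member is an array.
cat > /tmp/minimal_config.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF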
00:29:56.857 [2024-12-06 13:24:49.867963] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:56.857 [2024-12-06 13:24:49.867977] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:57.115 00:29:57.115 real 0m0.729s 00:29:57.115 user 0m0.464s 00:29:57.115 sys 0m0.160s 00:29:57.115 13:24:50 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.115 13:24:50 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:57.115 ************************************ 00:29:57.115 END TEST bdev_json_nonarray 00:29:57.115 ************************************ 00:29:57.115 13:24:50 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:29:57.115 13:24:50 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:29:57.115 13:24:50 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:29:57.115 13:24:50 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:29:57.115 13:24:50 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:29:57.375 13:24:50 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:57.375 13:24:50 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:57.375 13:24:50 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:29:57.375 13:24:50 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:29:57.375 13:24:50 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:29:57.375 13:24:50 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:29:57.375 00:29:57.375 real 0m48.856s 00:29:57.375 user 1m11.046s 00:29:57.375 sys 0m9.706s 00:29:57.375 13:24:50 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.375 ************************************ 00:29:57.375 END TEST blockdev_nvme 00:29:57.375 ************************************ 00:29:57.375 13:24:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:57.375 13:24:50 -- spdk/autotest.sh@209 -- # uname -s 00:29:57.375 13:24:50 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:29:57.375 13:24:50 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:57.375 13:24:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:57.375 13:24:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.375 13:24:50 -- common/autotest_common.sh@10 -- # set +x 00:29:57.375 ************************************ 00:29:57.375 START TEST blockdev_nvme_gpt 00:29:57.375 ************************************ 00:29:57.375 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:57.375 * Looking for test storage... 
00:29:57.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:57.375 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:57.375 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:29:57.375 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:57.375 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.375 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.635 13:24:50 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:29:57.635 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.635 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:57.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.635 --rc genhtml_branch_coverage=1 00:29:57.635 --rc genhtml_function_coverage=1 00:29:57.635 --rc genhtml_legend=1 00:29:57.635 --rc geninfo_all_blocks=1 00:29:57.635 --rc geninfo_unexecuted_blocks=1 00:29:57.635 00:29:57.635 ' 00:29:57.635 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:57.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.635 --rc 
genhtml_branch_coverage=1 00:29:57.635 --rc genhtml_function_coverage=1 00:29:57.635 --rc genhtml_legend=1 00:29:57.635 --rc geninfo_all_blocks=1 00:29:57.635 --rc geninfo_unexecuted_blocks=1 00:29:57.635 00:29:57.635 ' 00:29:57.635 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:57.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.635 --rc genhtml_branch_coverage=1 00:29:57.635 --rc genhtml_function_coverage=1 00:29:57.635 --rc genhtml_legend=1 00:29:57.635 --rc geninfo_all_blocks=1 00:29:57.635 --rc geninfo_unexecuted_blocks=1 00:29:57.635 00:29:57.635 ' 00:29:57.635 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:57.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.635 --rc genhtml_branch_coverage=1 00:29:57.635 --rc genhtml_function_coverage=1 00:29:57.635 --rc genhtml_legend=1 00:29:57.635 --rc geninfo_all_blocks=1 00:29:57.635 --rc geninfo_unexecuted_blocks=1 00:29:57.635 00:29:57.635 ' 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62614 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:57.635 13:24:50 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62614 00:29:57.635 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62614 ']' 00:29:57.635 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.635 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.635 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.635 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.635 13:24:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:57.635 [2024-12-06 13:24:50.655583] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:29:57.635 [2024-12-06 13:24:50.655786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62614 ] 00:29:57.894 [2024-12-06 13:24:50.856381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.153 [2024-12-06 13:24:51.003301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.088 13:24:52 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.088 13:24:52 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:29:59.088 13:24:52 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:29:59.088 13:24:52 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:29:59.088 13:24:52 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:59.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:59.950 Waiting for block devices as requested 00:29:59.951 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:59.951 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:00.210 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:00.210 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:05.481 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:05.481 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:30:05.481 13:24:58 
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:30:05.481 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:30:05.482 13:24:58 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
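The trace above is get_zoned_devs walking /sys/class/nvme: a namespace counts as zoned only if its queue/zoned attribute reads something other than none, and every device here reads none, so nothing is excluded from the GPT test. Condensed to its core logic (a sketch, not the harness code itself):

# Flag any NVMe namespace whose queue/zoned sysfs attribute is not "none".
for ns in /sys/block/nvme*n*; do
  [[ -e $ns/queue/zoned && $(<"$ns/queue/zoned") != none ]] && echo "zoned: ${ns##*/}"
done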
00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:30:05.482 BYT; 00:30:05.482 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:30:05.482 BYT; 00:30:05.482 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:30:05.482 13:24:58 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:30:05.482 13:24:58 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:30:06.439 The operation has completed successfully. 00:30:06.439 13:24:59 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:30:07.374 The operation has completed successfully. 00:30:07.374 13:25:00 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:08.309 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:08.877 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:08.877 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:30:08.877 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:08.877 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:30:08.877 13:25:01 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:30:08.877 13:25:01 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.877 13:25:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:09.136 [] 00:30:09.136 13:25:01 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.136 13:25:01 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:30:09.136 13:25:01 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:30:09.136 13:25:01 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:30:09.136 13:25:01 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:09.136 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:30:09.136 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.136 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.394 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:30:09.394 13:25:02 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.394 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:30:09.394 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.394 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.394 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.394 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:30:09.394 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:30:09.394 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.394 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:09.653 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.654 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:30:09.654 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:30:09.654 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9720a5b4-54bc-4569-9dc0-a84ff1d9c89d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9720a5b4-54bc-4569-9dc0-a84ff1d9c89d",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "53ffbc68-0272-4b1a-8696-8cce54ad7d83"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "53ffbc68-0272-4b1a-8696-8cce54ad7d83",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b7ddff5b-90dc-4e1c-93a4-28f1d3b1d428"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b7ddff5b-90dc-4e1c-93a4-28f1d3b1d428",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "7e757616-a7d8-48f9-b37c-bdd5a547e6ba"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7e757616-a7d8-48f9-b37c-bdd5a547e6ba",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0382fde3-9542-42ce-bb1a-81179ec6ba7d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0382fde3-9542-42ce-bb1a-81179ec6ba7d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:30:09.654 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:30:09.654 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:30:09.654 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:30:09.654 13:25:02 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62614 00:30:09.654 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62614 ']' 00:30:09.654 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62614 00:30:09.654 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:30:09.654 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:09.654 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62614 00:30:09.654 killing process with pid 62614 00:30:09.654 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:09.654 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:09.654 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62614' 00:30:09.654 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62614 00:30:09.654 13:25:02 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62614 00:30:12.943 13:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:12.943 13:25:05 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:30:12.943 13:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:30:12.943 13:25:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.943 13:25:05 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:12.943 ************************************ 00:30:12.943 START TEST bdev_hello_world 00:30:12.943 ************************************ 00:30:12.943 13:25:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:30:12.943 [2024-12-06 13:25:05.673678] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:30:12.943 [2024-12-06 13:25:05.673881] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63269 ] 00:30:12.943 [2024-12-06 13:25:05.877264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.943 [2024-12-06 13:25:06.033339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.884 [2024-12-06 13:25:06.810322] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:13.884 [2024-12-06 13:25:06.810503] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:30:13.884 [2024-12-06 13:25:06.810549] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:13.884 [2024-12-06 13:25:06.814478] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:13.884 [2024-12-06 13:25:06.814989] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:13.884 [2024-12-06 13:25:06.815027] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:13.884 [2024-12-06 13:25:06.815276] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
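hello_bdev has just walked the canonical bdev lifecycle: open the bdev, get an I/O channel, write a buffer, then read it back and print it. The invocation is the same one run_test issued, repeated as a standalone sketch (paths assume this VM's layout):

# -b names the bdev to exercise; bdev.json is the config generated earlier by gen_nvme.sh.
SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/build/examples/hello_bdev" --json "$SPDK/test/bdev/bdev.json" -b Nvme0n1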
00:30:13.884 00:30:13.884 [2024-12-06 13:25:06.815309] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:15.260 00:30:15.260 real 0m2.627s 00:30:15.260 user 0m2.131s 00:30:15.260 sys 0m0.383s 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.260 ************************************ 00:30:15.260 END TEST bdev_hello_world 00:30:15.260 ************************************ 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:30:15.260 13:25:08 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:30:15.260 13:25:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:15.260 13:25:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:15.260 13:25:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:15.260 ************************************ 00:30:15.260 START TEST bdev_bounds 00:30:15.260 ************************************ 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63317 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:15.260 Process bdevio pid: 63317 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63317' 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63317 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63317 ']' 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.260 13:25:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:30:15.518 [2024-12-06 13:25:08.361145] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:30:15.518 [2024-12-06 13:25:08.361658] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63317 ] 00:30:15.518 [2024-12-06 13:25:08.560594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:15.775 [2024-12-06 13:25:08.719051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.776 [2024-12-06 13:25:08.719262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.776 [2024-12-06 13:25:08.719285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.712 13:25:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:16.712 13:25:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:30:16.712 13:25:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:16.712 I/O targets: 00:30:16.712 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:30:16.712 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:30:16.712 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:30:16.712 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:30:16.712 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:30:16.712 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:30:16.713 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:30:16.713 00:30:16.713 00:30:16.713 CUnit - A unit testing framework for C - Version 2.1-3 00:30:16.713 http://cunit.sourceforge.net/ 00:30:16.713 00:30:16.713 00:30:16.713 Suite: bdevio tests on: Nvme3n1 00:30:16.713 Test: blockdev write read block ...passed 00:30:16.713 Test: blockdev write zeroes read block ...passed 00:30:16.713 Test: blockdev write zeroes read no split ...passed 00:30:16.713 Test: blockdev write zeroes read split ...passed 00:30:16.713 Test: blockdev write zeroes read split partial ...passed 00:30:16.713 Test: blockdev reset ...[2024-12-06 13:25:09.741153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:30:16.713 [2024-12-06 13:25:09.745859] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
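bdev_bounds separates driver from driven: bdevio was started with -w, which (as I read the harness) makes it initialize and then wait, and tests.py fires the perform_tests RPC that produces the CUnit output now scrolling past. Roughly, as a sketch (-s 0 requests no up-front hugepage reservation; the flags mirror the run_test line above):

# Start bdevio idle, then trigger the suites over the default RPC socket.
SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
sudo "$SPDK/test/bdev/bdevio/tests.py" perform_tests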
00:30:16.713 passed 00:30:16.713 Test: blockdev write read 8 blocks ...passed 00:30:16.713 Test: blockdev write read size > 128k ...passed 00:30:16.713 Test: blockdev write read invalid size ...passed 00:30:16.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:16.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:16.713 Test: blockdev write read max offset ...passed 00:30:16.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:16.713 Test: blockdev writev readv 8 blocks ...passed 00:30:16.713 Test: blockdev writev readv 30 x 1block ...passed 00:30:16.713 Test: blockdev writev readv block ...passed 00:30:16.713 Test: blockdev writev readv size > 128k ...passed 00:30:16.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:16.713 Test: blockdev comparev and writev ...[2024-12-06 13:25:09.756358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aca04000 len:0x1000 00:30:16.713 [2024-12-06 13:25:09.756557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:16.713 passed 00:30:16.713 Test: blockdev nvme passthru rw ...passed 00:30:16.713 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:25:09.757480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:30:16.713 passed 00:30:16.713 Test: blockdev nvme admin passthru ...[2024-12-06 13:25:09.757706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:30:16.713 passed 00:30:16.713 Test: blockdev copy ...passed 00:30:16.713 Suite: bdevio tests on: Nvme2n3 00:30:16.713 Test: blockdev write read block ...passed 00:30:16.713 Test: blockdev write zeroes read block ...passed 00:30:16.713 Test: blockdev write zeroes read no split ...passed 00:30:16.713 Test: blockdev write zeroes read split ...passed 00:30:16.980 Test: blockdev write zeroes read split partial ...passed 00:30:16.980 Test: blockdev reset ...[2024-12-06 13:25:09.841768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:30:16.980 [2024-12-06 13:25:09.846725] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
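The COMPARE FAILURE (02/85) notices above are expected, not errors: status code type 0x2 is the NVMe media-errors group and code 0x85 is Compare Failure, which is exactly the mismatch branch the comparev tests provoke before reporting passed. A rough nvme-cli equivalent of such a miscompare (a sketch from memory of nvme-cli's compare flags; device and data file are placeholders):

# Compare one block (--block-count is zero-based) against data chosen not to match.
sudo nvme compare /dev/nvme0n1 --start-block=0 --block-count=0 \
     --data-size=4096 --data=./mismatch.bin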
00:30:16.980 passed 00:30:16.980 Test: blockdev write read 8 blocks ...passed 00:30:16.980 Test: blockdev write read size > 128k ...passed 00:30:16.980 Test: blockdev write read invalid size ...passed 00:30:16.980 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:16.980 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:16.980 Test: blockdev write read max offset ...passed 00:30:16.980 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:16.980 Test: blockdev writev readv 8 blocks ...passed 00:30:16.980 Test: blockdev writev readv 30 x 1block ...passed 00:30:16.980 Test: blockdev writev readv block ...passed 00:30:16.980 Test: blockdev writev readv size > 128k ...passed 00:30:16.980 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:16.980 Test: blockdev comparev and writev ...[2024-12-06 13:25:09.857321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aca02000 len:0x1000 00:30:16.980 [2024-12-06 13:25:09.857876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:16.980 passed 00:30:16.980 Test: blockdev nvme passthru rw ...passed 00:30:16.980 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:25:09.859332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:30:16.980 passed 00:30:16.980 Test: blockdev nvme admin passthru ...passed 00:30:16.980 Test: blockdev copy ...[2024-12-06 13:25:09.859625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:30:16.980 passed 00:30:16.980 Suite: bdevio tests on: Nvme2n2 00:30:16.980 Test: blockdev write read block ...passed 00:30:16.980 Test: blockdev write zeroes read block ...passed 00:30:16.980 Test: blockdev write zeroes read no split ...passed 00:30:16.980 Test: blockdev write zeroes read split ...passed 00:30:16.980 Test: blockdev write zeroes read split partial ...passed 00:30:16.980 Test: blockdev reset ...[2024-12-06 13:25:09.954577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:30:16.980 [2024-12-06 13:25:09.959924] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:30:16.980 passed 00:30:16.980 Test: blockdev write read 8 blocks ...passed 00:30:16.980 Test: blockdev write read size > 128k ...passed 00:30:16.980 Test: blockdev write read invalid size ...passed 00:30:16.980 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:16.980 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:16.980 Test: blockdev write read max offset ...passed 00:30:16.980 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:16.980 Test: blockdev writev readv 8 blocks ...passed 00:30:16.980 Test: blockdev writev readv 30 x 1block ...passed 00:30:16.980 Test: blockdev writev readv block ...passed 00:30:16.980 Test: blockdev writev readv size > 128k ...passed 00:30:16.980 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:16.980 Test: blockdev comparev and writev ...[2024-12-06 13:25:09.970737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0838000 len:0x1000 00:30:16.980 passed 00:30:16.980 Test: blockdev nvme passthru rw ...[2024-12-06 13:25:09.971183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:16.980 passed 00:30:16.980 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:25:09.972115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:30:16.980 passed 00:30:16.980 Test: blockdev nvme admin passthru ...[2024-12-06 13:25:09.972332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:30:16.980 passed 00:30:16.980 Test: blockdev copy ...passed 00:30:16.980 Suite: bdevio tests on: Nvme2n1 00:30:16.980 Test: blockdev write read block ...passed 00:30:16.980 Test: blockdev write zeroes read block ...passed 00:30:16.980 Test: blockdev write zeroes read no split ...passed 00:30:16.980 Test: blockdev write zeroes read split ...passed 00:30:16.980 Test: blockdev write zeroes read split partial ...passed 00:30:16.980 Test: blockdev reset ...[2024-12-06 13:25:10.064053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:30:16.980 passed 00:30:16.980 Test: blockdev write read 8 blocks ...[2024-12-06 13:25:10.069078] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
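Each blockdev reset test logs the same disconnect/reconnect pair seen here: bdev_nvme tears the controller down and brings it back while I/O is quiesced. Outside the test, the same path can be poked through SPDK's RPC surface (a sketch; bdev_nvme_reset_controller is the RPC I believe matches, and Nvme2 is the controller name from the attach calls earlier):

# Ask the running target to reset one attached NVMe controller by name.
sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme2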
00:30:16.980 passed 00:30:16.980 Test: blockdev write read size > 128k ...passed 00:30:16.980 Test: blockdev write read invalid size ...passed 00:30:16.980 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:16.980 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:16.980 Test: blockdev write read max offset ...passed 00:30:16.980 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:16.980 Test: blockdev writev readv 8 blocks ...passed 00:30:16.980 Test: blockdev writev readv 30 x 1block ...passed 00:30:16.980 Test: blockdev writev readv block ...passed 00:30:16.980 Test: blockdev writev readv size > 128k ...passed 00:30:16.980 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:17.238 Test: blockdev comparev and writev ...[2024-12-06 13:25:10.078024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0834000 len:0x1000 00:30:17.238 [2024-12-06 13:25:10.078123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:17.238 passed 00:30:17.238 Test: blockdev nvme passthru rw ...passed 00:30:17.238 Test: blockdev nvme passthru vendor specific ...passed 00:30:17.238 Test: blockdev nvme admin passthru ...[2024-12-06 13:25:10.078796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-12-06 13:25:10.078836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:30:17.238 passed 00:30:17.239 Test: blockdev copy ...passed 00:30:17.239 Suite: bdevio tests on: Nvme1n1p2 00:30:17.239 Test: blockdev write read block ...passed 00:30:17.239 Test: blockdev write zeroes read block ...passed 00:30:17.239 Test: blockdev write zeroes read no split ...passed 00:30:17.239 Test: blockdev write zeroes read split ...passed 00:30:17.239 Test: blockdev write zeroes read split partial ...passed 00:30:17.239 Test: blockdev reset ...[2024-12-06 13:25:10.174116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:30:17.239 [2024-12-06 13:25:10.178880] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
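The Nvme1n1p1/Nvme1n1p2 bdevs under test here exist because the GPT setup earlier stamped SPDK's partition type GUID (grep'd out of module/bdev/gpt/gpt.h) onto the freshly labelled disk, and the gpt bdev module then surfaces each matching partition as its own bdev. The stamping step again, as a sketch (same GUIDs as the trace above; the device name is this VM's):

# -t sets the SPDK partition type GUID on partition 1, -u its unique GUID.
sudo sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
            -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1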
00:30:17.239 00:30:17.239 Test: blockdev write read 8 blocks ...passed 00:30:17.239 Test: blockdev write read size > 128k ...passed 00:30:17.239 Test: blockdev write read invalid size ...passed 00:30:17.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:17.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:17.239 Test: blockdev write read max offset ...passed 00:30:17.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:17.239 Test: blockdev writev readv 8 blocks ...passed 00:30:17.239 Test: blockdev writev readv 30 x 1block ...passed 00:30:17.239 Test: blockdev writev readv block ...passed 00:30:17.239 Test: blockdev writev readv size > 128k ...passed 00:30:17.239 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:17.239 Test: blockdev comparev and writev ...[2024-12-06 13:25:10.189550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c0830000 len:0x1000 00:30:17.239 [2024-12-06 13:25:10.189799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:17.239 passed 00:30:17.239 Test: blockdev nvme passthru rw ...passed 00:30:17.239 Test: blockdev nvme passthru vendor specific ...passed 00:30:17.239 Test: blockdev nvme admin passthru ...passed 00:30:17.239 Test: blockdev copy ...passed 00:30:17.239 Suite: bdevio tests on: Nvme1n1p1 00:30:17.239 Test: blockdev write read block ...passed 00:30:17.239 Test: blockdev write zeroes read block ...passed 00:30:17.239 Test: blockdev write zeroes read no split ...passed 00:30:17.239 Test: blockdev write zeroes read split ...passed 00:30:17.239 Test: blockdev write zeroes read split partial ...passed 00:30:17.239 Test: blockdev reset ...[2024-12-06 13:25:10.279101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:30:17.239 [2024-12-06 13:25:10.283755] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:30:17.239 Test: blockdev write read 8 blocks ...
00:30:17.239 passed 00:30:17.239 Test: blockdev write read size > 128k ...passed 00:30:17.239 Test: blockdev write read invalid size ...passed 00:30:17.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:17.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:17.239 Test: blockdev write read max offset ...passed 00:30:17.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:17.239 Test: blockdev writev readv 8 blocks ...passed 00:30:17.239 Test: blockdev writev readv 30 x 1block ...passed 00:30:17.239 Test: blockdev writev readv block ...passed 00:30:17.239 Test: blockdev writev readv size > 128k ...passed 00:30:17.239 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:17.239 Test: blockdev comparev and writev ...[2024-12-06 13:25:10.293165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2acc0e000 len:0x1000 00:30:17.239 [2024-12-06 13:25:10.293257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:17.239 passed 00:30:17.239 Test: blockdev nvme passthru rw ...passed 00:30:17.239 Test: blockdev nvme passthru vendor specific ...passed 00:30:17.239 Test: blockdev nvme admin passthru ...passed 00:30:17.239 Test: blockdev copy ...passed 00:30:17.239 Suite: bdevio tests on: Nvme0n1 00:30:17.239 Test: blockdev write read block ...passed 00:30:17.239 Test: blockdev write zeroes read block ...passed 00:30:17.239 Test: blockdev write zeroes read no split ...passed 00:30:17.497 Test: blockdev write zeroes read split ...passed 00:30:17.497 Test: blockdev write zeroes read split partial ...passed 00:30:17.497 Test: blockdev reset ...[2024-12-06 13:25:10.379818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:30:17.497 [2024-12-06 13:25:10.384373] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:30:17.497 passed 00:30:17.497 Test: blockdev write read 8 blocks ...passed 00:30:17.497 Test: blockdev write read size > 128k ...passed 00:30:17.497 Test: blockdev write read invalid size ...passed 00:30:17.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:17.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:17.497 Test: blockdev write read max offset ...passed 00:30:17.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:17.497 Test: blockdev writev readv 8 blocks ...passed 00:30:17.497 Test: blockdev writev readv 30 x 1block ...passed 00:30:17.497 Test: blockdev writev readv block ...passed 00:30:17.497 Test: blockdev writev readv size > 128k ...passed 00:30:17.497 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:17.497 Test: blockdev comparev and writev ...passed 00:30:17.497 Test: blockdev nvme passthru rw ...[2024-12-06 13:25:10.392227] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:30:17.497 separate metadata which is not supported yet. 
00:30:17.497 passed 00:30:17.497 Test: blockdev nvme passthru vendor specific ...passed 00:30:17.497 Test: blockdev nvme admin passthru ...[2024-12-06 13:25:10.392737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:30:17.497 [2024-12-06 13:25:10.392797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:30:17.497 passed 00:30:17.497 Test: blockdev copy ...passed 00:30:17.497 00:30:17.498 Run Summary: Type Total Ran Passed Failed Inactive 00:30:17.498 suites 7 7 n/a 0 0 00:30:17.498 tests 161 161 161 0 0 00:30:17.498 asserts 1025 1025 1025 0 n/a 00:30:17.498 00:30:17.498 Elapsed time = 2.037 seconds 00:30:17.498 0 00:30:17.498 13:25:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63317 00:30:17.498 13:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63317 ']' 00:30:17.498 13:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63317 00:30:17.498 13:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:30:17.498 13:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:17.498 13:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63317 00:30:17.498 killing process with pid 63317 00:30:17.498 13:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:17.498 13:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:17.498 13:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63317' 00:30:17.498 13:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63317 00:30:17.498 13:25:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63317 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:30:18.877 00:30:18.877 real 0m3.516s 00:30:18.877 user 0m8.987s 00:30:18.877 sys 0m0.605s 00:30:18.877 ************************************ 00:30:18.877 END TEST bdev_bounds 00:30:18.877 ************************************ 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:30:18.877 13:25:11 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:30:18.877 13:25:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:18.877 13:25:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:18.877 13:25:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:18.877 ************************************ 00:30:18.877 START TEST bdev_nbd 00:30:18.877 ************************************ 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63388 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63388 /var/tmp/spdk-nbd.sock 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63388 ']' 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:18.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.877 13:25:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:30:18.877 [2024-12-06 13:25:11.961459] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:30:18.877 [2024-12-06 13:25:11.961644] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:19.135 [2024-12-06 13:25:12.175393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.394 [2024-12-06 13:25:12.383922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:30:20.332 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:20.591 1+0 records in 00:30:20.591 1+0 records out 00:30:20.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695879 s, 5.9 MB/s 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:30:20.591 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:20.850 1+0 records in 00:30:20.850 1+0 records out 00:30:20.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685201 s, 6.0 MB/s 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:20.850 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:20.851 13:25:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:20.851 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:20.851 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:30:20.851 13:25:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:30:21.109 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:30:21.109 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:30:21.109 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:30:21.109 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:30:21.109 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:21.109 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:21.109 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:21.109 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:30:21.368 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:21.368 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:21.368 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:21.368 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:21.368 1+0 records in 00:30:21.368 1+0 records out 00:30:21.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000944287 s, 4.3 MB/s 00:30:21.368 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:21.368 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:21.368 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:21.368 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:21.368 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:21.368 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:21.368 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:30:21.368 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:21.626 1+0 records in 00:30:21.626 1+0 records out 00:30:21.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604622 s, 6.8 MB/s 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:30:21.626 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:21.884 1+0 records in 00:30:21.884 1+0 records out 00:30:21.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000843318 s, 4.9 MB/s 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:30:21.884 13:25:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:22.143 1+0 records in 00:30:22.143 1+0 records out 00:30:22.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601823 s, 6.8 MB/s 00:30:22.143 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:22.414 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:22.414 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:22.414 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:22.414 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:22.414 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:22.414 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:30:22.414 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:30:22.672 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:30:22.672 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:30:22.672 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:22.673 1+0 records in 00:30:22.673 1+0 records out 00:30:22.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787606 s, 5.2 MB/s 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:30:22.673 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:22.932 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd0", 00:30:22.932 "bdev_name": "Nvme0n1" 00:30:22.932 }, 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd1", 00:30:22.932 "bdev_name": "Nvme1n1p1" 00:30:22.932 }, 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd2", 00:30:22.932 "bdev_name": "Nvme1n1p2" 00:30:22.932 }, 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd3", 00:30:22.932 "bdev_name": "Nvme2n1" 00:30:22.932 }, 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd4", 00:30:22.932 "bdev_name": "Nvme2n2" 00:30:22.932 }, 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd5", 00:30:22.932 "bdev_name": "Nvme2n3" 00:30:22.932 }, 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd6", 00:30:22.932 "bdev_name": "Nvme3n1" 00:30:22.932 } 00:30:22.932 ]' 00:30:22.932 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:22.932 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:22.932 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd0", 00:30:22.932 "bdev_name": "Nvme0n1" 00:30:22.932 }, 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd1", 00:30:22.932 "bdev_name": "Nvme1n1p1" 00:30:22.932 }, 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd2", 00:30:22.932 "bdev_name": "Nvme1n1p2" 00:30:22.932 }, 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd3", 00:30:22.932 "bdev_name": "Nvme2n1" 00:30:22.932 }, 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd4", 00:30:22.932 "bdev_name": "Nvme2n2" 00:30:22.932 }, 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd5", 00:30:22.932 "bdev_name": "Nvme2n3" 00:30:22.932 }, 00:30:22.932 { 00:30:22.932 "nbd_device": "/dev/nbd6", 00:30:22.932 "bdev_name": "Nvme3n1" 00:30:22.932 } 00:30:22.932 ]' 00:30:22.932 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:30:22.932 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:22.932 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:30:22.932 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:22.932 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:22.932 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:22.932 13:25:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:23.192 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:23.192 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:23.192 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:23.192 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:23.192 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:23.192 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:23.192 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:23.192 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:23.192 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:23.192 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:23.450 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:23.450 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:23.450 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:23.450 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:23.450 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:23.451 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:23.451 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:23.451 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:23.451 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:23.451 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:30:23.709 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:30:23.709 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:30:23.709 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:30:23.709 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:23.709 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:23.709 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:30:23.968 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:23.968 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:23.968 13:25:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:23.968 13:25:16 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:30:23.968 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:30:23.968 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:30:23.968 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:30:23.968 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:23.968 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:23.968 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:30:23.968 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:23.968 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:23.968 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:23.968 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:30:24.226 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:30:24.226 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:30:24.226 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:30:24.226 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:24.226 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:24.226 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:30:24.226 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:24.226 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:24.226 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:24.226 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:30:24.485 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:30:24.485 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:30:24.485 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:30:24.485 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:24.485 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:24.485 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:30:24.485 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:24.485 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:24.485 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:24.485 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:30:24.744 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:30:24.744 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:30:24.744 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:30:24.744 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:24.744 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:24.744 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:30:24.744 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:24.744 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:24.744 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:24.744 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:24.744 13:25:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:30:25.311 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:25.312 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:30:25.312 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:25.312 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:30:25.312 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:25.312 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:30:25.312 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:25.312 
13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:30:25.312 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:25.312 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:30:25.312 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:25.312 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:30:25.312 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:30:25.572 /dev/nbd0 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:25.572 1+0 records in 00:30:25.572 1+0 records out 00:30:25.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549652 s, 7.5 MB/s 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:30:25.572 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:30:25.831 /dev/nbd1 00:30:25.831 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:25.832 13:25:18 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:25.832 1+0 records in 00:30:25.832 1+0 records out 00:30:25.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708224 s, 5.8 MB/s 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:30:25.832 13:25:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:30:26.091 /dev/nbd10 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:26.091 1+0 records in 00:30:26.091 1+0 records out 00:30:26.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694281 s, 5.9 MB/s 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:30:26.091 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:30:26.351 /dev/nbd11 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:26.351 1+0 records in 00:30:26.351 1+0 records out 00:30:26.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106868 s, 3.8 MB/s 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:30:26.351 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:30:26.610 /dev/nbd12 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:26.610 1+0 records in 00:30:26.610 1+0 records out 00:30:26.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111468 s, 3.7 MB/s 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:26.610 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.892 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:26.892 13:25:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:26.892 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:26.892 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:30:26.893 13:25:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:30:27.196 /dev/nbd13 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:27.196 1+0 records in 00:30:27.196 1+0 records out 00:30:27.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741393 s, 5.5 MB/s 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:30:27.196 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:30:27.454 /dev/nbd14 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:27.454 1+0 records in 00:30:27.454 1+0 records out 00:30:27.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767227 s, 5.3 MB/s 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:27.454 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:27.712 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:27.712 { 00:30:27.712 "nbd_device": "/dev/nbd0", 00:30:27.712 "bdev_name": "Nvme0n1" 00:30:27.712 }, 00:30:27.712 { 00:30:27.712 "nbd_device": "/dev/nbd1", 00:30:27.712 "bdev_name": "Nvme1n1p1" 00:30:27.712 }, 00:30:27.712 { 00:30:27.712 "nbd_device": "/dev/nbd10", 00:30:27.712 "bdev_name": "Nvme1n1p2" 00:30:27.712 }, 00:30:27.712 { 00:30:27.712 "nbd_device": "/dev/nbd11", 00:30:27.712 "bdev_name": "Nvme2n1" 00:30:27.712 }, 00:30:27.712 { 00:30:27.712 "nbd_device": "/dev/nbd12", 00:30:27.712 "bdev_name": "Nvme2n2" 00:30:27.712 }, 00:30:27.712 { 00:30:27.713 "nbd_device": "/dev/nbd13", 00:30:27.713 "bdev_name": "Nvme2n3" 
00:30:27.713 }, 00:30:27.713 { 00:30:27.713 "nbd_device": "/dev/nbd14", 00:30:27.713 "bdev_name": "Nvme3n1" 00:30:27.713 } 00:30:27.713 ]' 00:30:27.713 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:27.713 { 00:30:27.713 "nbd_device": "/dev/nbd0", 00:30:27.713 "bdev_name": "Nvme0n1" 00:30:27.713 }, 00:30:27.713 { 00:30:27.713 "nbd_device": "/dev/nbd1", 00:30:27.713 "bdev_name": "Nvme1n1p1" 00:30:27.713 }, 00:30:27.713 { 00:30:27.713 "nbd_device": "/dev/nbd10", 00:30:27.713 "bdev_name": "Nvme1n1p2" 00:30:27.713 }, 00:30:27.713 { 00:30:27.713 "nbd_device": "/dev/nbd11", 00:30:27.713 "bdev_name": "Nvme2n1" 00:30:27.713 }, 00:30:27.713 { 00:30:27.713 "nbd_device": "/dev/nbd12", 00:30:27.713 "bdev_name": "Nvme2n2" 00:30:27.713 }, 00:30:27.713 { 00:30:27.713 "nbd_device": "/dev/nbd13", 00:30:27.713 "bdev_name": "Nvme2n3" 00:30:27.713 }, 00:30:27.713 { 00:30:27.713 "nbd_device": "/dev/nbd14", 00:30:27.713 "bdev_name": "Nvme3n1" 00:30:27.713 } 00:30:27.713 ]' 00:30:27.713 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:27.971 /dev/nbd1 00:30:27.971 /dev/nbd10 00:30:27.971 /dev/nbd11 00:30:27.971 /dev/nbd12 00:30:27.971 /dev/nbd13 00:30:27.971 /dev/nbd14' 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:27.971 /dev/nbd1 00:30:27.971 /dev/nbd10 00:30:27.971 /dev/nbd11 00:30:27.971 /dev/nbd12 00:30:27.971 /dev/nbd13 00:30:27.971 /dev/nbd14' 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:27.971 256+0 records in 00:30:27.971 256+0 records out 00:30:27.971 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115983 s, 90.4 MB/s 00:30:27.971 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:27.972 13:25:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:27.972 256+0 records in 00:30:27.972 256+0 records out 00:30:27.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.191748 s, 5.5 MB/s 00:30:27.972 13:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:27.972 13:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:28.229 256+0 records in 00:30:28.229 256+0 records out 00:30:28.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169633 s, 6.2 MB/s 00:30:28.229 13:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:28.229 13:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:30:28.486 256+0 records in 00:30:28.486 256+0 records out 00:30:28.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165871 s, 6.3 MB/s 00:30:28.486 13:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:28.486 13:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:30:28.486 256+0 records in 00:30:28.486 256+0 records out 00:30:28.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172363 s, 6.1 MB/s 00:30:28.744 13:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:28.744 13:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:30:28.744 256+0 records in 00:30:28.744 256+0 records out 00:30:28.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16416 s, 6.4 MB/s 00:30:28.744 13:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:28.744 13:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:30:29.002 256+0 records in 00:30:29.002 256+0 records out 00:30:29.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157261 s, 6.7 MB/s 00:30:29.002 13:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:29.002 13:25:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:30:29.002 256+0 records in 00:30:29.002 256+0 records out 00:30:29.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169601 s, 6.2 MB/s 00:30:29.002 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:30:29.002 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:30:29.002 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:29.002 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:29.002 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:29.002 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:29.002 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:29.002 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:30:29.002 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:29.261 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:29.520 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:29.520 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:29.520 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:29.520 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:29.520 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:29.520 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:29.520 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:29.520 13:25:22 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:30:29.520 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:29.520 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:29.779 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:29.779 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:29.779 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:29.779 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:29.779 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:29.779 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:29.779 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:29.779 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:29.779 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:29.779 13:25:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:30:30.037 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:30:30.037 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:30:30.037 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:30:30.037 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:30.037 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:30.037 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:30:30.037 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:30.037 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:30.037 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:30.037 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:30:30.295 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:30:30.295 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:30:30.295 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:30:30.295 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:30.295 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:30.295 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:30:30.295 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:30.295 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:30.295 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:30.295 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:30:30.553 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:30:30.553 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:30:30.553 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:30:30.553 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:30.553 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:30.553 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:30:30.553 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:30.553 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:30.553 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:30.553 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:30:30.812 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:30:30.812 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:30:30.812 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:30:30.812 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:30.812 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:30.812 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:30:30.812 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:30.812 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:30.812 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:30.813 13:25:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:30:31.070 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:30:31.070 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:30:31.070 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:30:31.070 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:31.070 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:31.070 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:30:31.070 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:31.070 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:31.070 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:31.070 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:31.070 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:31.328 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:31.328 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:31.328 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:30:31.585 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:31.843 malloc_lvol_verify 00:30:31.843 13:25:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:32.120 73e0efde-ee0d-456a-8d8c-7f52926b1c2a 00:30:32.120 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:32.377 aca538bb-b6e0-4b87-b3d4-c7d0c90fa01d 00:30:32.377 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:32.634 /dev/nbd0 00:30:32.634 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:30:32.634 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:30:32.634 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:30:32.634 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:30:32.634 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:30:32.634 mke2fs 1.47.0 (5-Feb-2023) 00:30:32.634 Discarding device blocks: 0/4096 done 00:30:32.635 Creating filesystem with 4096 1k blocks and 1024 inodes 00:30:32.635 00:30:32.635 Allocating group tables: 0/1 done 00:30:32.635 Writing inode tables: 0/1 done 00:30:32.635 Creating journal (1024 blocks): done 00:30:32.635 Writing superblocks and filesystem accounting information: 0/1 done 00:30:32.635 00:30:32.635 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:32.635 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:32.635 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:32.635 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:32.635 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:30:32.635 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:30:32.635 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63388 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63388 ']' 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63388 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63388 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63388' 00:30:32.892 killing process with pid 63388 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63388 00:30:32.892 13:25:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63388 00:30:34.789 13:25:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:30:34.789 00:30:34.789 real 0m15.557s 00:30:34.789 user 0m20.319s 00:30:34.789 sys 0m6.570s 00:30:34.789 13:25:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:34.789 ************************************ 00:30:34.789 END TEST bdev_nbd 00:30:34.789 ************************************ 00:30:34.789 13:25:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:30:34.789 13:25:27 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:30:34.789 13:25:27 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:30:34.789 13:25:27 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:30:34.789 skipping fio tests on NVMe due to multi-ns failures. 00:30:34.789 13:25:27 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:30:34.789 13:25:27 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:34.789 13:25:27 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:34.789 13:25:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:30:34.789 13:25:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:34.789 13:25:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:34.789 ************************************ 00:30:34.789 START TEST bdev_verify 00:30:34.789 ************************************ 00:30:34.789 13:25:27 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:34.789 [2024-12-06 13:25:27.561852] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:30:34.789 [2024-12-06 13:25:27.562133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63851 ] 00:30:34.789 [2024-12-06 13:25:27.773594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:35.048 [2024-12-06 13:25:28.001240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.048 [2024-12-06 13:25:28.001265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.980 Running I/O for 5 seconds... 
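The verify stage just launched drives every bdev from bdev.json through bdevperf: -q 128 keeps 128 I/Os outstanding per job, -o 4096 uses 4 KiB operations, -w verify writes a pattern and reads it back for comparison, -t 5 runs for five seconds, and -m 0x3 spreads the jobs across cores 0 and 1 (hence the Core Mask 0x1/0x2 job pairs in the table below). The run_test wrapper that brackets each stage with START/END banners and a real/user/sys timing triple looks roughly like this; a simplified sketch only, since the real helper in common/autotest_common.sh also manages xtrace state and failure reporting:

run_test() {
    local test_name=$1
    shift

    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"

    # `time` emits the real/user/sys triple printed after each stage.
    time "$@"
    local rc=$?

    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}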
00:30:38.290 16448.00 IOPS, 64.25 MiB/s
[2024-12-06T13:25:32.325Z] 17824.00 IOPS, 69.62 MiB/s
[2024-12-06T13:25:33.259Z] 17664.00 IOPS, 69.00 MiB/s
[2024-12-06T13:25:34.196Z] 17392.00 IOPS, 67.94 MiB/s
[2024-12-06T13:25:34.196Z] 17075.20 IOPS, 66.70 MiB/s
00:30:41.096 Latency(us)
00:30:41.096 [2024-12-06T13:25:34.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:41.096 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x0 length 0xbd0bd
00:30:41.096 Nvme0n1 : 5.08 1246.85 4.87 0.00 0.00 102040.55 13232.03 95869.81
00:30:41.096 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:30:41.096 Nvme0n1 : 5.06 1137.61 4.44 0.00 0.00 112088.11 25715.08 101362.35
00:30:41.096 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x0 length 0x4ff80
00:30:41.096 Nvme1n1p1 : 5.08 1246.17 4.87 0.00 0.00 101863.27 13606.52 88379.98
00:30:41.096 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x4ff80 length 0x4ff80
00:30:41.096 Nvme1n1p1 : 5.06 1137.22 4.44 0.00 0.00 111875.38 24466.77 98865.74
00:30:41.096 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x0 length 0x4ff7f
00:30:41.096 Nvme1n1p2 : 5.11 1253.38 4.90 0.00 0.00 101437.13 16477.62 82388.11
00:30:41.096 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:30:41.096 Nvme1n1p2 : 5.07 1136.82 4.44 0.00 0.00 111635.04 22219.82 95370.48
00:30:41.096 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x0 length 0x80000
00:30:41.096 Nvme2n1 : 5.11 1252.91 4.89 0.00 0.00 101236.87 15978.30 80890.15
00:30:41.096 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x80000 length 0x80000
00:30:41.096 Nvme2n1 : 5.09 1145.33 4.47 0.00 0.00 110633.38 5523.75 94871.16
00:30:41.096 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x0 length 0x80000
00:30:41.096 Nvme2n2 : 5.11 1252.45 4.89 0.00 0.00 101035.49 15978.30 83386.76
00:30:41.096 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x80000 length 0x80000
00:30:41.096 Nvme2n2 : 5.09 1144.75 4.47 0.00 0.00 110423.67 6147.90 95370.48
00:30:41.096 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x0 length 0x80000
00:30:41.096 Nvme2n3 : 5.11 1251.99 4.89 0.00 0.00 100809.55 16352.79 85883.37
00:30:41.096 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x80000 length 0x80000
00:30:41.096 Nvme2n3 : 5.10 1154.02 4.51 0.00 0.00 109465.46 11047.50 95370.48
00:30:41.096 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x0 length 0x20000
00:30:41.096 Nvme3n1 : 5.11 1251.49 4.89 0.00 0.00 100624.36 13294.45 89877.94
00:30:41.096 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:41.096 Verification LBA range: start 0x20000 length 0x20000
00:30:41.096 Nvme3n1 : 5.10 1153.67 4.51 0.00 0.00 109271.13 10610.59 100363.70
[2024-12-06T13:25:34.196Z] ===================================================================================================================
[2024-12-06T13:25:34.196Z] Total : 16764.67 65.49 0.00 0.00 105805.58 5523.75 101362.35
00:30:43.000
00:30:43.000 real 0m8.302s
00:30:43.000 user 0m15.033s
00:30:43.000 sys 0m0.462s
00:30:43.000 13:25:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:43.000 13:25:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:30:43.000 ************************************
00:30:43.000 END TEST bdev_verify
00:30:43.000 ************************************
00:30:43.000 13:25:35 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:30:43.000 13:25:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:30:43.000 13:25:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:43.000 13:25:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:30:43.000 ************************************
00:30:43.000 START TEST bdev_verify_big_io
00:30:43.000 ************************************
00:30:43.000 13:25:35 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:30:43.001 [2024-12-06 13:25:35.882574] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization...
00:30:43.001 [2024-12-06 13:25:35.882722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63962 ]
00:30:43.001 [2024-12-06 13:25:36.065319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:30:43.259 [2024-12-06 13:25:36.225231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:43.259 [2024-12-06 13:25:36.225257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:44.196 Running I/O for 5 seconds...
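The big-I/O pass just launched repeats the verify workload with -o 65536, so each operation now moves 64 KiB; IOPS fall by roughly an order of magnitude while per-operation bandwidth grows. A quick way to sanity-check the table that follows: the MiB/s column is simply IOPS x I/O size. For the core-mask-0x1 Nvme3n1 job, 111.61 IOPS x 65536 B / 1048576 gives about 6.98 MiB/s, exactly as reported.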
00:30:50.288 1363.00 IOPS, 85.19 MiB/s
[2024-12-06T13:25:43.388Z] 3089.50 IOPS, 193.09 MiB/s
00:30:50.288 Latency(us)
[2024-12-06T13:25:43.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:50.288 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x0 length 0xbd0b
00:30:50.288 Nvme0n1 : 5.83 90.77 5.67 0.00 0.00 1367900.15 17476.27 2236962.13
00:30:50.288 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0xbd0b length 0xbd0b
00:30:50.288 Nvme0n1 : 5.83 89.17 5.57 0.00 0.00 1393972.45 29709.65 1653754.15
00:30:50.288 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x0 length 0x4ff8
00:30:50.288 Nvme1n1p1 : 5.83 91.44 5.71 0.00 0.00 1327957.61 42192.70 2268918.74
00:30:50.288 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x4ff8 length 0x4ff8
00:30:50.288 Nvme1n1p1 : 5.83 76.78 4.80 0.00 0.00 1568753.51 149796.57 2396745.14
00:30:50.288 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x0 length 0x4ff7
00:30:50.288 Nvme1n1p2 : 5.83 92.61 5.79 0.00 0.00 1282208.03 48184.56 2300875.34
00:30:50.288 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x4ff7 length 0x4ff7
00:30:50.288 Nvme1n1p2 : 5.84 86.52 5.41 0.00 0.00 1364824.74 91375.91 1605819.25
00:30:50.288 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x0 length 0x8000
00:30:50.288 Nvme2n1 : 5.86 96.47 6.03 0.00 0.00 1206281.08 35202.19 2316853.64
00:30:50.288 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x8000 length 0x8000
00:30:50.288 Nvme2n1 : 5.84 86.84 5.43 0.00 0.00 1323283.81 91875.23 1589840.94
00:30:50.288 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x0 length 0x8000
00:30:50.288 Nvme2n2 : 5.86 96.96 6.06 0.00 0.00 1169727.05 33953.89 2364788.54
00:30:50.288 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x8000 length 0x8000
00:30:50.288 Nvme2n2 : 5.84 91.41 5.71 0.00 0.00 1239772.81 45438.29 1549895.19
00:30:50.288 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x0 length 0x8000
00:30:50.288 Nvme2n3 : 5.86 100.64 6.29 0.00 0.00 1101292.76 23218.47 2396745.14
00:30:50.288 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x8000 length 0x8000
00:30:50.288 Nvme2n3 : 5.85 98.50 6.16 0.00 0.00 1133730.40 2449.80 1581851.79
00:30:50.288 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x0 length 0x2000
00:30:50.288 Nvme3n1 : 5.88 111.61 6.98 0.00 0.00 971429.67 8675.72 2428701.74
00:30:50.288 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:50.288 Verification LBA range: start 0x2000 length 0x2000
00:30:50.288 Nvme3n1 : 5.86 105.43 6.59 0.00 0.00 1036624.74 3557.67 1645765.00
[2024-12-06T13:25:43.388Z] ===================================================================================================================
[2024-12-06T13:25:43.388Z] Total : 1315.16 82.20 0.00 0.00 1235569.96 2449.80 2428701.74
00:30:52.185
00:30:52.185 real 0m9.493s
00:30:52.185 user 0m17.493s
00:30:52.185 sys 0m0.480s
00:30:52.185 13:25:45 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:52.185 13:25:45 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:30:52.185 ************************************
00:30:52.185 END TEST bdev_verify_big_io
00:30:52.185 ************************************
00:30:52.441 13:25:45 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:52.441 13:25:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:30:52.441 13:25:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:52.441 13:25:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:30:52.441 ************************************
00:30:52.441 START TEST bdev_write_zeroes
00:30:52.441 ************************************
00:30:52.441 13:25:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:52.698 [2024-12-06 13:25:45.469043] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization...
00:30:52.698 [2024-12-06 13:25:45.469043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64077 ]
00:30:52.698 [2024-12-06 13:25:45.664864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:52.698 [2024-12-06 13:25:45.791987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:53.628 Running I/O for 1 seconds...
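The write_zeroes pass below runs on a single core (-c 0x1 in the EAL line above) for one second, still at queue depth 128 with 4 KiB operations. With a fixed queue depth, IOPS and average latency are tied together by Little's law (outstanding I/O = IOPS x average latency), which makes a handy cross-check when reading bdevperf tables: for the Nvme1n1p1 job below, 5934.04 IOPS x 21483.79 us comes to roughly 127.5 outstanding I/Os, i.e. the 128-deep queue stayed essentially full for the whole run.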
00:30:54.586 40482.00 IOPS, 158.13 MiB/s
00:30:54.586 Latency(us)
[2024-12-06T13:25:47.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:54.586 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:54.586 Nvme0n1 : 1.03 5192.39 20.28 0.00 0.00 24595.03 6272.73 144803.35
00:30:54.586 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:54.586 Nvme1n1p1 : 1.03 5934.04 23.18 0.00 0.00 21483.79 10236.10 56423.38
00:30:54.586 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:54.586 Nvme1n1p2 : 1.03 5892.80 23.02 0.00 0.00 21554.87 11297.16 64412.53
00:30:54.586 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:54.586 Nvme2n1 : 1.03 5887.15 23.00 0.00 0.00 21471.03 11234.74 64412.53
00:30:54.586 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:54.586 Nvme2n2 : 1.03 5881.48 22.97 0.00 0.00 21449.97 11172.33 64911.85
00:30:54.586 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:54.586 Nvme2n3 : 1.03 5864.77 22.91 0.00 0.00 21447.70 10673.01 64911.85
00:30:54.586 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:54.586 Nvme3n1 : 1.04 5846.96 22.84 0.00 0.00 21446.64 9050.21 64911.85
[2024-12-06T13:25:47.686Z] ===================================================================================================================
[2024-12-06T13:25:47.686Z] Total : 40499.58 158.20 0.00 0.00 21874.32 6272.73 144803.35
00:30:55.962
00:30:55.962 real 0m3.643s
00:30:55.962 user 0m3.182s
00:30:55.962 sys 0m0.340s
00:30:55.962 13:25:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:55.962 13:25:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:30:55.962 ************************************
00:30:55.962 END TEST bdev_write_zeroes
00:30:55.962 ************************************
00:30:55.962 13:25:49 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:55.962 13:25:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:30:55.962 13:25:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:55.962 13:25:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:30:55.962 ************************************
00:30:55.962 START TEST bdev_json_nonenclosed
00:30:55.962 ************************************
00:30:55.962 13:25:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:56.220 [2024-12-06 13:25:49.189576] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization...
00:30:56.220 [2024-12-06 13:25:49.189781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64135 ] 00:30:56.479 [2024-12-06 13:25:49.403624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.739 [2024-12-06 13:25:49.612499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.739 [2024-12-06 13:25:49.612654] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:30:56.739 [2024-12-06 13:25:49.612693] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:56.739 [2024-12-06 13:25:49.612713] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:57.020 00:30:57.020 real 0m0.913s 00:30:57.020 user 0m0.601s 00:30:57.020 sys 0m0.204s 00:30:57.020 13:25:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.020 ************************************ 00:30:57.020 END TEST bdev_json_nonenclosed 00:30:57.020 13:25:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:30:57.020 ************************************ 00:30:57.020 13:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:57.020 13:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:30:57.020 13:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.020 13:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:57.020 ************************************ 00:30:57.020 START TEST bdev_json_nonarray 00:30:57.020 ************************************ 00:30:57.020 13:25:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:57.279 [2024-12-06 13:25:50.127003] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:30:57.279 [2024-12-06 13:25:50.127219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64166 ] 00:30:57.279 [2024-12-06 13:25:50.318480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.538 [2024-12-06 13:25:50.522313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.538 [2024-12-06 13:25:50.522489] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:30:57.538 [2024-12-06 13:25:50.522528] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:57.538 [2024-12-06 13:25:50.522547] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:57.797 00:30:57.797 real 0m0.839s 00:30:57.797 user 0m0.553s 00:30:57.797 sys 0m0.180s 00:30:57.797 13:25:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.797 13:25:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:30:57.797 ************************************ 00:30:57.797 END TEST bdev_json_nonarray 00:30:57.797 ************************************ 00:30:58.057 13:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:30:58.057 13:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:30:58.057 13:25:50 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:30:58.057 13:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:58.057 13:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:58.057 13:25:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:58.057 ************************************ 00:30:58.057 START TEST bdev_gpt_uuid 00:30:58.057 ************************************ 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64196 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64196 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 64196 ']' 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:58.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:58.057 13:25:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:58.057 [2024-12-06 13:25:51.059776] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:30:58.057 [2024-12-06 13:25:51.059921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64196 ] 00:30:58.316 [2024-12-06 13:25:51.250990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.575 [2024-12-06 13:25:51.458267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:59.952 Some configs were skipped because the RPC state that can call them passed over. 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.952 13:25:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:59.952 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.952 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:30:59.952 { 00:30:59.952 "name": "Nvme1n1p1", 00:30:59.952 "aliases": [ 00:30:59.952 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:59.952 ], 00:30:59.952 "product_name": "GPT Disk", 00:30:59.952 "block_size": 4096, 00:30:59.952 "num_blocks": 655104, 00:30:59.952 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:59.952 "assigned_rate_limits": { 00:30:59.952 "rw_ios_per_sec": 0, 00:30:59.952 "rw_mbytes_per_sec": 0, 00:30:59.952 "r_mbytes_per_sec": 0, 00:30:59.952 "w_mbytes_per_sec": 0 00:30:59.952 }, 00:30:59.952 "claimed": false, 00:30:59.952 "zoned": false, 00:30:59.952 "supported_io_types": { 00:30:59.952 "read": true, 00:30:59.952 "write": true, 00:30:59.952 "unmap": true, 00:30:59.952 "flush": true, 00:30:59.952 "reset": true, 00:30:59.952 "nvme_admin": false, 00:30:59.952 "nvme_io": false, 00:30:59.952 "nvme_io_md": false, 00:30:59.952 "write_zeroes": true, 00:30:59.952 "zcopy": false, 00:30:59.952 "get_zone_info": false, 00:30:59.952 "zone_management": false, 00:30:59.952 "zone_append": false, 00:30:59.952 "compare": true, 00:30:59.952 "compare_and_write": false, 00:30:59.952 "abort": true, 00:30:59.952 "seek_hole": false, 00:30:59.952 "seek_data": false, 00:30:59.953 "copy": true, 00:30:59.953 "nvme_iov_md": false 00:30:59.953 }, 00:30:59.953 "driver_specific": { 
00:30:59.953 "gpt": { 00:30:59.953 "base_bdev": "Nvme1n1", 00:30:59.953 "offset_blocks": 256, 00:30:59.953 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:59.953 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:59.953 "partition_name": "SPDK_TEST_first" 00:30:59.953 } 00:30:59.953 } 00:30:59.953 } 00:30:59.953 ]' 00:30:59.953 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:30:59.953 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:31:00.211 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:31:00.211 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:31:00.211 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:31:00.211 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:31:00.211 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:31:00.211 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.211 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:31:00.211 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.211 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:31:00.211 { 00:31:00.211 "name": "Nvme1n1p2", 00:31:00.211 "aliases": [ 00:31:00.211 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:31:00.212 ], 00:31:00.212 "product_name": "GPT Disk", 00:31:00.212 "block_size": 4096, 00:31:00.212 "num_blocks": 655103, 00:31:00.212 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:31:00.212 "assigned_rate_limits": { 00:31:00.212 "rw_ios_per_sec": 0, 00:31:00.212 "rw_mbytes_per_sec": 0, 00:31:00.212 "r_mbytes_per_sec": 0, 00:31:00.212 "w_mbytes_per_sec": 0 00:31:00.212 }, 00:31:00.212 "claimed": false, 00:31:00.212 "zoned": false, 00:31:00.212 "supported_io_types": { 00:31:00.212 "read": true, 00:31:00.212 "write": true, 00:31:00.212 "unmap": true, 00:31:00.212 "flush": true, 00:31:00.212 "reset": true, 00:31:00.212 "nvme_admin": false, 00:31:00.212 "nvme_io": false, 00:31:00.212 "nvme_io_md": false, 00:31:00.212 "write_zeroes": true, 00:31:00.212 "zcopy": false, 00:31:00.212 "get_zone_info": false, 00:31:00.212 "zone_management": false, 00:31:00.212 "zone_append": false, 00:31:00.212 "compare": true, 00:31:00.212 "compare_and_write": false, 00:31:00.212 "abort": true, 00:31:00.212 "seek_hole": false, 00:31:00.212 "seek_data": false, 00:31:00.212 "copy": true, 00:31:00.212 "nvme_iov_md": false 00:31:00.212 }, 00:31:00.212 "driver_specific": { 00:31:00.212 "gpt": { 00:31:00.212 "base_bdev": "Nvme1n1", 00:31:00.212 "offset_blocks": 655360, 00:31:00.212 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:31:00.212 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:31:00.212 "partition_name": "SPDK_TEST_second" 00:31:00.212 } 00:31:00.212 } 00:31:00.212 } 00:31:00.212 ]' 00:31:00.212 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:31:00.212 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:31:00.212 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:31:00.212 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:31:00.212 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:31:00.471 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:31:00.471 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 64196 00:31:00.471 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 64196 ']' 00:31:00.471 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 64196 00:31:00.471 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:31:00.471 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.471 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64196 00:31:00.471 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:00.471 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:00.471 killing process with pid 64196 00:31:00.471 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64196' 00:31:00.471 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 64196 00:31:00.471 13:25:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 64196 00:31:03.753 00:31:03.753 real 0m5.218s 00:31:03.753 user 0m5.271s 00:31:03.753 sys 0m0.792s 00:31:03.753 13:25:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.753 13:25:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:31:03.753 ************************************ 00:31:03.753 END TEST bdev_gpt_uuid 00:31:03.753 ************************************ 00:31:03.753 13:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:31:03.753 13:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:31:03.753 13:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:31:03.753 13:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:31:03.753 13:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:03.753 13:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:31:03.753 13:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:31:03.753 13:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:31:03.753 13:25:56 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:03.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:03.753 Waiting for block devices as requested 00:31:04.011 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:04.012 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:31:04.012 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:04.271 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:09.556 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:09.556 13:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:31:09.556 13:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:31:09.556 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:31:09.556 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:31:09.556 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:31:09.556 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:31:09.556 13:26:02 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:31:09.556 00:31:09.556 real 1m12.300s 00:31:09.556 user 1m30.088s 00:31:09.556 sys 0m14.329s 00:31:09.556 13:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:09.556 13:26:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:31:09.556 ************************************ 00:31:09.556 END TEST blockdev_nvme_gpt 00:31:09.556 ************************************ 00:31:09.556 13:26:02 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:31:09.556 13:26:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:09.556 13:26:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:09.556 13:26:02 -- common/autotest_common.sh@10 -- # set +x 00:31:09.556 ************************************ 00:31:09.556 START TEST nvme 00:31:09.556 ************************************ 00:31:09.556 13:26:02 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:31:09.816 * Looking for test storage... 00:31:09.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:09.816 13:26:02 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:09.816 13:26:02 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:31:09.816 13:26:02 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:09.816 13:26:02 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:09.816 13:26:02 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:09.816 13:26:02 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:09.816 13:26:02 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:09.816 13:26:02 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:31:09.816 13:26:02 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:31:09.816 13:26:02 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:31:09.816 13:26:02 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:31:09.816 13:26:02 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:31:09.816 13:26:02 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:31:09.816 13:26:02 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:31:09.816 13:26:02 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:09.816 13:26:02 nvme -- scripts/common.sh@344 -- # case "$op" in 00:31:09.816 13:26:02 nvme -- scripts/common.sh@345 -- # : 1 00:31:09.816 13:26:02 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:09.816 13:26:02 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:09.816 13:26:02 nvme -- scripts/common.sh@365 -- # decimal 1 00:31:09.816 13:26:02 nvme -- scripts/common.sh@353 -- # local d=1 00:31:09.816 13:26:02 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:09.816 13:26:02 nvme -- scripts/common.sh@355 -- # echo 1 00:31:09.816 13:26:02 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:31:09.816 13:26:02 nvme -- scripts/common.sh@366 -- # decimal 2 00:31:09.816 13:26:02 nvme -- scripts/common.sh@353 -- # local d=2 00:31:09.816 13:26:02 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:09.816 13:26:02 nvme -- scripts/common.sh@355 -- # echo 2 00:31:09.816 13:26:02 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:31:09.816 13:26:02 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:09.816 13:26:02 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:09.816 13:26:02 nvme -- scripts/common.sh@368 -- # return 0 00:31:09.816 13:26:02 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:09.816 13:26:02 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:09.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.816 --rc genhtml_branch_coverage=1 00:31:09.816 --rc genhtml_function_coverage=1 00:31:09.816 --rc genhtml_legend=1 00:31:09.816 --rc geninfo_all_blocks=1 00:31:09.816 --rc geninfo_unexecuted_blocks=1 00:31:09.816 00:31:09.816 ' 00:31:09.816 13:26:02 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:09.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.816 --rc genhtml_branch_coverage=1 00:31:09.816 --rc genhtml_function_coverage=1 00:31:09.816 --rc genhtml_legend=1 00:31:09.816 --rc geninfo_all_blocks=1 00:31:09.816 --rc geninfo_unexecuted_blocks=1 00:31:09.816 00:31:09.816 ' 00:31:09.816 13:26:02 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:09.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.816 --rc genhtml_branch_coverage=1 00:31:09.816 --rc genhtml_function_coverage=1 00:31:09.816 --rc genhtml_legend=1 00:31:09.816 --rc geninfo_all_blocks=1 00:31:09.816 --rc geninfo_unexecuted_blocks=1 00:31:09.816 00:31:09.816 ' 00:31:09.816 13:26:02 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:09.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.816 --rc genhtml_branch_coverage=1 00:31:09.816 --rc genhtml_function_coverage=1 00:31:09.816 --rc genhtml_legend=1 00:31:09.816 --rc geninfo_all_blocks=1 00:31:09.816 --rc geninfo_unexecuted_blocks=1 00:31:09.816 00:31:09.816 ' 00:31:09.816 13:26:02 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:10.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:11.009 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:11.009 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:11.009 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:31:11.266 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:31:11.266 13:26:04 nvme -- nvme/nvme.sh@79 -- # uname 00:31:11.266 13:26:04 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:31:11.266 13:26:04 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:31:11.266 13:26:04 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:31:11.266 13:26:04 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:31:11.266 13:26:04 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:31:11.266 13:26:04 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:31:11.266 13:26:04 nvme -- common/autotest_common.sh@1075 -- # stubpid=64861 00:31:11.266 13:26:04 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:31:11.266 Waiting for stub to ready for secondary processes... 00:31:11.266 13:26:04 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:31:11.266 13:26:04 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:31:11.266 13:26:04 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64861 ]] 00:31:11.266 13:26:04 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:31:11.266 [2024-12-06 13:26:04.362265] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:31:11.266 [2024-12-06 13:26:04.362492] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:31:12.636 13:26:05 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:31:12.636 13:26:05 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64861 ]] 00:31:12.636 13:26:05 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:31:13.199 [2024-12-06 13:26:06.194087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:13.470 13:26:06 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:31:13.470 13:26:06 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64861 ]] 00:31:13.470 13:26:06 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:31:13.470 [2024-12-06 13:26:06.360931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:13.470 [2024-12-06 13:26:06.361118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.470 [2024-12-06 13:26:06.361175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:13.470 [2024-12-06 13:26:06.381059] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:31:13.470 [2024-12-06 13:26:06.381129] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:31:13.470 [2024-12-06 13:26:06.396647] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:31:13.470 [2024-12-06 13:26:06.396830] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:31:13.470 [2024-12-06 13:26:06.400446] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:31:13.470 [2024-12-06 13:26:06.400721] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:31:13.470 [2024-12-06 13:26:06.400814] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:31:13.470 [2024-12-06 13:26:06.404283] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:31:13.470 [2024-12-06 13:26:06.404517] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:31:13.470 [2024-12-06 13:26:06.404607] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:31:13.470 [2024-12-06 13:26:06.408036] 
nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:31:13.470 [2024-12-06 13:26:06.408424] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:31:13.470 [2024-12-06 13:26:06.408517] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:31:13.470 [2024-12-06 13:26:06.408594] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:31:13.470 [2024-12-06 13:26:06.408672] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:31:14.403 13:26:07 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:31:14.404 done. 00:31:14.404 13:26:07 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:31:14.404 13:26:07 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:31:14.404 13:26:07 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:31:14.404 13:26:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.404 13:26:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:14.404 ************************************ 00:31:14.404 START TEST nvme_reset 00:31:14.404 ************************************ 00:31:14.404 13:26:07 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:31:14.661 Initializing NVMe Controllers 00:31:14.661 Skipping QEMU NVMe SSD at 0000:00:10.0 00:31:14.661 Skipping QEMU NVMe SSD at 0000:00:11.0 00:31:14.661 Skipping QEMU NVMe SSD at 0000:00:13.0 00:31:14.661 Skipping QEMU NVMe SSD at 0000:00:12.0 00:31:14.661 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:31:14.661 00:31:14.661 real 0m0.417s 00:31:14.661 user 0m0.148s 00:31:14.661 sys 0m0.200s 00:31:14.661 13:26:07 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.661 13:26:07 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:31:14.661 ************************************ 00:31:14.661 END TEST nvme_reset 00:31:14.661 ************************************ 00:31:14.920 13:26:07 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:31:14.920 13:26:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:14.920 13:26:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.920 13:26:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:14.920 ************************************ 00:31:14.920 START TEST nvme_identify 00:31:14.920 ************************************ 00:31:14.920 13:26:07 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:31:14.920 13:26:07 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:31:14.920 13:26:07 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:31:14.920 13:26:07 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:31:14.920 13:26:07 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:31:14.920 13:26:07 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:14.920 13:26:07 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:31:14.920 13:26:07 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:14.920 13:26:07 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:31:14.920 13:26:07 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:14.920 13:26:07 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:31:14.920 13:26:07 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:31:14.920 13:26:07 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:31:15.179 [2024-12-06 13:26:08.228702] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64901 terminated unexpected 00:31:15.179 ===================================================== 00:31:15.179 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:15.179 ===================================================== 00:31:15.179 Controller Capabilities/Features 00:31:15.179 ================================ 00:31:15.179 Vendor ID: 1b36 00:31:15.179 Subsystem Vendor ID: 1af4 00:31:15.179 Serial Number: 12340 00:31:15.179 Model Number: QEMU NVMe Ctrl 00:31:15.179 Firmware Version: 8.0.0 00:31:15.179 Recommended Arb Burst: 6 00:31:15.179 IEEE OUI Identifier: 00 54 52 00:31:15.179 Multi-path I/O 00:31:15.179 May have multiple subsystem ports: No 00:31:15.179 May have multiple controllers: No 00:31:15.179 Associated with SR-IOV VF: No 00:31:15.179 Max Data Transfer Size: 524288 00:31:15.179 Max Number of Namespaces: 256 00:31:15.179 Max Number of I/O Queues: 64 00:31:15.179 NVMe Specification Version (VS): 1.4 00:31:15.179 NVMe Specification Version (Identify): 1.4 00:31:15.179 Maximum Queue Entries: 2048 00:31:15.179 Contiguous Queues Required: Yes 00:31:15.179 Arbitration Mechanisms Supported 00:31:15.179 Weighted Round Robin: Not Supported 00:31:15.179 Vendor Specific: Not Supported 00:31:15.179 Reset Timeout: 7500 ms 00:31:15.179 Doorbell Stride: 4 bytes 00:31:15.179 NVM Subsystem Reset: Not Supported 00:31:15.179 Command Sets Supported 00:31:15.179 NVM Command Set: Supported 00:31:15.179 Boot Partition: Not Supported 00:31:15.179 Memory Page Size Minimum: 4096 bytes 00:31:15.179 Memory Page Size Maximum: 65536 bytes 00:31:15.179 Persistent Memory Region: Not Supported 00:31:15.179 Optional Asynchronous Events Supported 00:31:15.179 Namespace Attribute Notices: Supported 00:31:15.179 Firmware Activation Notices: Not Supported 00:31:15.179 ANA Change Notices: Not Supported 00:31:15.179 PLE Aggregate Log Change Notices: Not Supported 00:31:15.179 LBA Status Info Alert Notices: Not Supported 00:31:15.179 EGE Aggregate Log Change Notices: Not Supported 00:31:15.179 Normal NVM Subsystem Shutdown event: Not Supported 00:31:15.179 Zone Descriptor Change Notices: Not Supported 00:31:15.179 Discovery Log Change Notices: Not Supported 00:31:15.179 Controller Attributes 00:31:15.179 128-bit Host Identifier: Not Supported 00:31:15.179 Non-Operational Permissive Mode: Not Supported 00:31:15.180 NVM Sets: Not Supported 00:31:15.180 Read Recovery Levels: Not Supported 00:31:15.180 Endurance Groups: Not Supported 00:31:15.180 Predictable Latency Mode: Not Supported 00:31:15.180 Traffic Based Keep ALive: Not Supported 00:31:15.180 Namespace Granularity: Not Supported 00:31:15.180 SQ Associations: Not Supported 00:31:15.180 UUID List: Not Supported 00:31:15.180 Multi-Domain Subsystem: Not Supported 00:31:15.180 Fixed Capacity Management: Not Supported 00:31:15.180 Variable Capacity Management: Not Supported 00:31:15.180 Delete Endurance Group: Not Supported 
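Each of the identify dumps in this run is generated one controller at a time: get_nvme_bdfs collects the PCIe addresses from gen_nvme.sh (traced above), and spdk_nvme_identify is then pointed at them. A minimal standalone sketch of that flow, assuming the repo layout used in this run and assuming the -r transport-ID option of spdk_nvme_identify:

  rootdir=/home/vagrant/spdk_repo/spdk
  # Enumerate the NVMe PCIe addresses the same way get_nvme_bdfs does above.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
    # -r targets a single controller; the harness above instead runs one
    # 'spdk_nvme_identify -i 0' pass that walks every attached controller.
    "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf"
  done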
00:31:15.180 Delete NVM Set: Not Supported 00:31:15.180 Extended LBA Formats Supported: Supported 00:31:15.180 Flexible Data Placement Supported: Not Supported 00:31:15.180 00:31:15.180 Controller Memory Buffer Support 00:31:15.180 ================================ 00:31:15.180 Supported: No 00:31:15.180 00:31:15.180 Persistent Memory Region Support 00:31:15.180 ================================ 00:31:15.180 Supported: No 00:31:15.180 00:31:15.180 Admin Command Set Attributes 00:31:15.180 ============================ 00:31:15.180 Security Send/Receive: Not Supported 00:31:15.180 Format NVM: Supported 00:31:15.180 Firmware Activate/Download: Not Supported 00:31:15.180 Namespace Management: Supported 00:31:15.180 Device Self-Test: Not Supported 00:31:15.180 Directives: Supported 00:31:15.180 NVMe-MI: Not Supported 00:31:15.180 Virtualization Management: Not Supported 00:31:15.180 Doorbell Buffer Config: Supported 00:31:15.180 Get LBA Status Capability: Not Supported 00:31:15.180 Command & Feature Lockdown Capability: Not Supported 00:31:15.180 Abort Command Limit: 4 00:31:15.180 Async Event Request Limit: 4 00:31:15.180 Number of Firmware Slots: N/A 00:31:15.180 Firmware Slot 1 Read-Only: N/A 00:31:15.180 Firmware Activation Without Reset: N/A 00:31:15.180 Multiple Update Detection Support: N/A 00:31:15.180 Firmware Update Granularity: No Information Provided 00:31:15.180 Per-Namespace SMART Log: Yes 00:31:15.180 Asymmetric Namespace Access Log Page: Not Supported 00:31:15.180 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:31:15.180 Command Effects Log Page: Supported 00:31:15.180 Get Log Page Extended Data: Supported 00:31:15.180 Telemetry Log Pages: Not Supported 00:31:15.180 Persistent Event Log Pages: Not Supported 00:31:15.180 Supported Log Pages Log Page: May Support 00:31:15.180 Commands Supported & Effects Log Page: Not Supported 00:31:15.180 Feature Identifiers & Effects Log Page:May Support 00:31:15.180 NVMe-MI Commands & Effects Log Page: May Support 00:31:15.180 Data Area 4 for Telemetry Log: Not Supported 00:31:15.180 Error Log Page Entries Supported: 1 00:31:15.180 Keep Alive: Not Supported 00:31:15.180 00:31:15.180 NVM Command Set Attributes 00:31:15.180 ========================== 00:31:15.180 Submission Queue Entry Size 00:31:15.180 Max: 64 00:31:15.180 Min: 64 00:31:15.180 Completion Queue Entry Size 00:31:15.180 Max: 16 00:31:15.180 Min: 16 00:31:15.180 Number of Namespaces: 256 00:31:15.180 Compare Command: Supported 00:31:15.180 Write Uncorrectable Command: Not Supported 00:31:15.180 Dataset Management Command: Supported 00:31:15.180 Write Zeroes Command: Supported 00:31:15.180 Set Features Save Field: Supported 00:31:15.180 Reservations: Not Supported 00:31:15.180 Timestamp: Supported 00:31:15.180 Copy: Supported 00:31:15.180 Volatile Write Cache: Present 00:31:15.180 Atomic Write Unit (Normal): 1 00:31:15.180 Atomic Write Unit (PFail): 1 00:31:15.180 Atomic Compare & Write Unit: 1 00:31:15.180 Fused Compare & Write: Not Supported 00:31:15.180 Scatter-Gather List 00:31:15.180 SGL Command Set: Supported 00:31:15.180 SGL Keyed: Not Supported 00:31:15.180 SGL Bit Bucket Descriptor: Not Supported 00:31:15.180 SGL Metadata Pointer: Not Supported 00:31:15.180 Oversized SGL: Not Supported 00:31:15.180 SGL Metadata Address: Not Supported 00:31:15.180 SGL Offset: Not Supported 00:31:15.180 Transport SGL Data Block: Not Supported 00:31:15.180 Replay Protected Memory Block: Not Supported 00:31:15.180 00:31:15.180 Firmware Slot Information 00:31:15.180 ========================= 
00:31:15.180 Active slot: 1 00:31:15.180 Slot 1 Firmware Revision: 1.0 00:31:15.180 00:31:15.180 00:31:15.180 Commands Supported and Effects 00:31:15.180 ============================== 00:31:15.180 Admin Commands 00:31:15.180 -------------- 00:31:15.180 Delete I/O Submission Queue (00h): Supported 00:31:15.180 Create I/O Submission Queue (01h): Supported 00:31:15.180 Get Log Page (02h): Supported 00:31:15.180 Delete I/O Completion Queue (04h): Supported 00:31:15.180 Create I/O Completion Queue (05h): Supported 00:31:15.180 Identify (06h): Supported 00:31:15.180 Abort (08h): Supported 00:31:15.180 Set Features (09h): Supported 00:31:15.180 Get Features (0Ah): Supported 00:31:15.180 Asynchronous Event Request (0Ch): Supported 00:31:15.180 Namespace Attachment (15h): Supported NS-Inventory-Change 00:31:15.180 Directive Send (19h): Supported 00:31:15.180 Directive Receive (1Ah): Supported 00:31:15.180 Virtualization Management (1Ch): Supported 00:31:15.180 Doorbell Buffer Config (7Ch): Supported 00:31:15.180 Format NVM (80h): Supported LBA-Change 00:31:15.180 I/O Commands 00:31:15.180 ------------ 00:31:15.180 Flush (00h): Supported LBA-Change 00:31:15.180 Write (01h): Supported LBA-Change 00:31:15.180 Read (02h): Supported 00:31:15.180 Compare (05h): Supported 00:31:15.180 Write Zeroes (08h): Supported LBA-Change 00:31:15.180 Dataset Management (09h): Supported LBA-Change 00:31:15.180 Unknown (0Ch): Supported 00:31:15.180 Unknown (12h): Supported 00:31:15.180 Copy (19h): Supported LBA-Change 00:31:15.180 Unknown (1Dh): Supported LBA-Change 00:31:15.180 00:31:15.180 Error Log 00:31:15.180 ========= 00:31:15.180 00:31:15.180 Arbitration 00:31:15.180 =========== 00:31:15.180 Arbitration Burst: no limit 00:31:15.180 00:31:15.180 Power Management 00:31:15.180 ================ 00:31:15.180 Number of Power States: 1 00:31:15.180 Current Power State: Power State #0 00:31:15.180 Power State #0: 00:31:15.180 Max Power: 25.00 W 00:31:15.180 Non-Operational State: Operational 00:31:15.180 Entry Latency: 16 microseconds 00:31:15.180 Exit Latency: 4 microseconds 00:31:15.180 Relative Read Throughput: 0 00:31:15.180 Relative Read Latency: 0 00:31:15.180 Relative Write Throughput: 0 00:31:15.180 Relative Write Latency: 0 00:31:15.180 Idle Power[2024-12-06 13:26:08.230215] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64901 terminated unexpected 00:31:15.180 : Not Reported 00:31:15.180 Active Power: Not Reported 00:31:15.180 Non-Operational Permissive Mode: Not Supported 00:31:15.180 00:31:15.180 Health Information 00:31:15.180 ================== 00:31:15.180 Critical Warnings: 00:31:15.180 Available Spare Space: OK 00:31:15.180 Temperature: OK 00:31:15.180 Device Reliability: OK 00:31:15.180 Read Only: No 00:31:15.180 Volatile Memory Backup: OK 00:31:15.180 Current Temperature: 323 Kelvin (50 Celsius) 00:31:15.180 Temperature Threshold: 343 Kelvin (70 Celsius) 00:31:15.180 Available Spare: 0% 00:31:15.180 Available Spare Threshold: 0% 00:31:15.180 Life Percentage Used: 0% 00:31:15.180 Data Units Read: 641 00:31:15.180 Data Units Written: 569 00:31:15.180 Host Read Commands: 30164 00:31:15.180 Host Write Commands: 29950 00:31:15.180 Controller Busy Time: 0 minutes 00:31:15.180 Power Cycles: 0 00:31:15.180 Power On Hours: 0 hours 00:31:15.180 Unsafe Shutdowns: 0 00:31:15.180 Unrecoverable Media Errors: 0 00:31:15.180 Lifetime Error Log Entries: 0 00:31:15.180 Warning Temperature Time: 0 minutes 00:31:15.180 Critical Temperature Time: 0 minutes 00:31:15.180 00:31:15.180 
Number of Queues 00:31:15.180 ================ 00:31:15.180 Number of I/O Submission Queues: 64 00:31:15.180 Number of I/O Completion Queues: 64 00:31:15.180 00:31:15.180 ZNS Specific Controller Data 00:31:15.180 ============================ 00:31:15.180 Zone Append Size Limit: 0 00:31:15.180 00:31:15.180 00:31:15.180 Active Namespaces 00:31:15.180 ================= 00:31:15.180 Namespace ID:1 00:31:15.180 Error Recovery Timeout: Unlimited 00:31:15.180 Command Set Identifier: NVM (00h) 00:31:15.180 Deallocate: Supported 00:31:15.180 Deallocated/Unwritten Error: Supported 00:31:15.180 Deallocated Read Value: All 0x00 00:31:15.180 Deallocate in Write Zeroes: Not Supported 00:31:15.180 Deallocated Guard Field: 0xFFFF 00:31:15.180 Flush: Supported 00:31:15.180 Reservation: Not Supported 00:31:15.181 Metadata Transferred as: Separate Metadata Buffer 00:31:15.181 Namespace Sharing Capabilities: Private 00:31:15.181 Size (in LBAs): 1548666 (5GiB) 00:31:15.181 Capacity (in LBAs): 1548666 (5GiB) 00:31:15.181 Utilization (in LBAs): 1548666 (5GiB) 00:31:15.181 Thin Provisioning: Not Supported 00:31:15.181 Per-NS Atomic Units: No 00:31:15.181 Maximum Single Source Range Length: 128 00:31:15.181 Maximum Copy Length: 128 00:31:15.181 Maximum Source Range Count: 128 00:31:15.181 NGUID/EUI64 Never Reused: No 00:31:15.181 Namespace Write Protected: No 00:31:15.181 Number of LBA Formats: 8 00:31:15.181 Current LBA Format: LBA Format #07 00:31:15.181 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:15.181 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:15.181 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:15.181 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:15.181 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:15.181 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:15.181 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:15.181 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:15.181 00:31:15.181 NVM Specific Namespace Data 00:31:15.181 =========================== 00:31:15.181 Logical Block Storage Tag Mask: 0 00:31:15.181 Protection Information Capabilities: 00:31:15.181 16b Guard Protection Information Storage Tag Support: No 00:31:15.181 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:31:15.181 Storage Tag Check Read Support: No 00:31:15.181 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.181 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.181 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.181 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.181 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.181 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.181 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.181 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.181 ===================================================== 00:31:15.181 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:15.181 ===================================================== 00:31:15.181 Controller Capabilities/Features 00:31:15.181 ================================ 00:31:15.181 Vendor ID: 1b36 00:31:15.181 Subsystem Vendor ID: 1af4 00:31:15.181 Serial Number: 
12341 00:31:15.181 Model Number: QEMU NVMe Ctrl 00:31:15.181 Firmware Version: 8.0.0 00:31:15.181 Recommended Arb Burst: 6 00:31:15.181 IEEE OUI Identifier: 00 54 52 00:31:15.181 Multi-path I/O 00:31:15.181 May have multiple subsystem ports: No 00:31:15.181 May have multiple controllers: No 00:31:15.181 Associated with SR-IOV VF: No 00:31:15.181 Max Data Transfer Size: 524288 00:31:15.181 Max Number of Namespaces: 256 00:31:15.181 Max Number of I/O Queues: 64 00:31:15.181 NVMe Specification Version (VS): 1.4 00:31:15.181 NVMe Specification Version (Identify): 1.4 00:31:15.181 Maximum Queue Entries: 2048 00:31:15.181 Contiguous Queues Required: Yes 00:31:15.181 Arbitration Mechanisms Supported 00:31:15.181 Weighted Round Robin: Not Supported 00:31:15.181 Vendor Specific: Not Supported 00:31:15.181 Reset Timeout: 7500 ms 00:31:15.181 Doorbell Stride: 4 bytes 00:31:15.181 NVM Subsystem Reset: Not Supported 00:31:15.181 Command Sets Supported 00:31:15.181 NVM Command Set: Supported 00:31:15.181 Boot Partition: Not Supported 00:31:15.181 Memory Page Size Minimum: 4096 bytes 00:31:15.181 Memory Page Size Maximum: 65536 bytes 00:31:15.181 Persistent Memory Region: Not Supported 00:31:15.181 Optional Asynchronous Events Supported 00:31:15.181 Namespace Attribute Notices: Supported 00:31:15.181 Firmware Activation Notices: Not Supported 00:31:15.181 ANA Change Notices: Not Supported 00:31:15.181 PLE Aggregate Log Change Notices: Not Supported 00:31:15.181 LBA Status Info Alert Notices: Not Supported 00:31:15.181 EGE Aggregate Log Change Notices: Not Supported 00:31:15.181 Normal NVM Subsystem Shutdown event: Not Supported 00:31:15.181 Zone Descriptor Change Notices: Not Supported 00:31:15.181 Discovery Log Change Notices: Not Supported 00:31:15.181 Controller Attributes 00:31:15.181 128-bit Host Identifier: Not Supported 00:31:15.181 Non-Operational Permissive Mode: Not Supported 00:31:15.181 NVM Sets: Not Supported 00:31:15.181 Read Recovery Levels: Not Supported 00:31:15.181 Endurance Groups: Not Supported 00:31:15.181 Predictable Latency Mode: Not Supported 00:31:15.181 Traffic Based Keep ALive: Not Supported 00:31:15.181 Namespace Granularity: Not Supported 00:31:15.181 SQ Associations: Not Supported 00:31:15.181 UUID List: Not Supported 00:31:15.181 Multi-Domain Subsystem: Not Supported 00:31:15.181 Fixed Capacity Management: Not Supported 00:31:15.181 Variable Capacity Management: Not Supported 00:31:15.181 Delete Endurance Group: Not Supported 00:31:15.181 Delete NVM Set: Not Supported 00:31:15.181 Extended LBA Formats Supported: Supported 00:31:15.181 Flexible Data Placement Supported: Not Supported 00:31:15.181 00:31:15.181 Controller Memory Buffer Support 00:31:15.181 ================================ 00:31:15.181 Supported: No 00:31:15.181 00:31:15.181 Persistent Memory Region Support 00:31:15.181 ================================ 00:31:15.181 Supported: No 00:31:15.181 00:31:15.181 Admin Command Set Attributes 00:31:15.181 ============================ 00:31:15.181 Security Send/Receive: Not Supported 00:31:15.181 Format NVM: Supported 00:31:15.181 Firmware Activate/Download: Not Supported 00:31:15.181 Namespace Management: Supported 00:31:15.181 Device Self-Test: Not Supported 00:31:15.181 Directives: Supported 00:31:15.181 NVMe-MI: Not Supported 00:31:15.181 Virtualization Management: Not Supported 00:31:15.181 Doorbell Buffer Config: Supported 00:31:15.181 Get LBA Status Capability: Not Supported 00:31:15.181 Command & Feature Lockdown Capability: Not Supported 00:31:15.181 Abort 
Command Limit: 4 00:31:15.181 Async Event Request Limit: 4 00:31:15.181 Number of Firmware Slots: N/A 00:31:15.181 Firmware Slot 1 Read-Only: N/A 00:31:15.181 Firmware Activation Without Reset: N/A 00:31:15.181 Multiple Update Detection Support: N/A 00:31:15.181 Firmware Update Granularity: No Information Provided 00:31:15.181 Per-Namespace SMART Log: Yes 00:31:15.181 Asymmetric Namespace Access Log Page: Not Supported 00:31:15.181 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:31:15.181 Command Effects Log Page: Supported 00:31:15.181 Get Log Page Extended Data: Supported 00:31:15.181 Telemetry Log Pages: Not Supported 00:31:15.181 Persistent Event Log Pages: Not Supported 00:31:15.181 Supported Log Pages Log Page: May Support 00:31:15.181 Commands Supported & Effects Log Page: Not Supported 00:31:15.181 Feature Identifiers & Effects Log Page:May Support 00:31:15.181 NVMe-MI Commands & Effects Log Page: May Support 00:31:15.181 Data Area 4 for Telemetry Log: Not Supported 00:31:15.181 Error Log Page Entries Supported: 1 00:31:15.181 Keep Alive: Not Supported 00:31:15.181 00:31:15.181 NVM Command Set Attributes 00:31:15.181 ========================== 00:31:15.181 Submission Queue Entry Size 00:31:15.181 Max: 64 00:31:15.181 Min: 64 00:31:15.181 Completion Queue Entry Size 00:31:15.181 Max: 16 00:31:15.181 Min: 16 00:31:15.181 Number of Namespaces: 256 00:31:15.181 Compare Command: Supported 00:31:15.181 Write Uncorrectable Command: Not Supported 00:31:15.181 Dataset Management Command: Supported 00:31:15.181 Write Zeroes Command: Supported 00:31:15.181 Set Features Save Field: Supported 00:31:15.181 Reservations: Not Supported 00:31:15.181 Timestamp: Supported 00:31:15.181 Copy: Supported 00:31:15.181 Volatile Write Cache: Present 00:31:15.181 Atomic Write Unit (Normal): 1 00:31:15.181 Atomic Write Unit (PFail): 1 00:31:15.181 Atomic Compare & Write Unit: 1 00:31:15.181 Fused Compare & Write: Not Supported 00:31:15.181 Scatter-Gather List 00:31:15.181 SGL Command Set: Supported 00:31:15.181 SGL Keyed: Not Supported 00:31:15.181 SGL Bit Bucket Descriptor: Not Supported 00:31:15.181 SGL Metadata Pointer: Not Supported 00:31:15.181 Oversized SGL: Not Supported 00:31:15.181 SGL Metadata Address: Not Supported 00:31:15.181 SGL Offset: Not Supported 00:31:15.181 Transport SGL Data Block: Not Supported 00:31:15.181 Replay Protected Memory Block: Not Supported 00:31:15.181 00:31:15.181 Firmware Slot Information 00:31:15.181 ========================= 00:31:15.181 Active slot: 1 00:31:15.181 Slot 1 Firmware Revision: 1.0 00:31:15.181 00:31:15.181 00:31:15.181 Commands Supported and Effects 00:31:15.181 ============================== 00:31:15.181 Admin Commands 00:31:15.181 -------------- 00:31:15.181 Delete I/O Submission Queue (00h): Supported 00:31:15.181 Create I/O Submission Queue (01h): Supported 00:31:15.182 Get Log Page (02h): Supported 00:31:15.182 Delete I/O Completion Queue (04h): Supported 00:31:15.182 Create I/O Completion Queue (05h): Supported 00:31:15.182 Identify (06h): Supported 00:31:15.182 Abort (08h): Supported 00:31:15.182 Set Features (09h): Supported 00:31:15.182 Get Features (0Ah): Supported 00:31:15.182 Asynchronous Event Request (0Ch): Supported 00:31:15.182 Namespace Attachment (15h): Supported NS-Inventory-Change 00:31:15.182 Directive Send (19h): Supported 00:31:15.182 Directive Receive (1Ah): Supported 00:31:15.182 Virtualization Management (1Ch): Supported 00:31:15.182 Doorbell Buffer Config (7Ch): Supported 00:31:15.182 Format NVM (80h): Supported LBA-Change 
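Every controller's dump repeats the same sections and field names, so the four controllers in this run (serials 12340, 12341, 12343, 12342) can be compared from a saved copy of this output with a plain text filter; a sketch, not part of the harness, where identify.log is a hypothetical saved copy:

  # Pull the headline fields for each controller out of the saved dump.
  grep -E 'NVMe Controller at 0000|Serial Number:|Model Number:|Firmware Version:|Current Temperature:' identify.log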
00:31:15.182 I/O Commands 00:31:15.182 ------------ 00:31:15.182 Flush (00h): Supported LBA-Change 00:31:15.182 Write (01h): Supported LBA-Change 00:31:15.182 Read (02h): Supported 00:31:15.182 Compare (05h): Supported 00:31:15.182 Write Zeroes (08h): Supported LBA-Change 00:31:15.182 Dataset Management (09h): Supported LBA-Change 00:31:15.182 Unknown (0Ch): Supported 00:31:15.182 Unknown (12h): Supported 00:31:15.182 Copy (19h): Supported LBA-Change 00:31:15.182 Unknown (1Dh): Supported LBA-Change 00:31:15.182 00:31:15.182 Error Log 00:31:15.182 ========= 00:31:15.182 00:31:15.182 Arbitration 00:31:15.182 =========== 00:31:15.182 Arbitration Burst: no limit 00:31:15.182 00:31:15.182 Power Management 00:31:15.182 ================ 00:31:15.182 Number of Power States: 1 00:31:15.182 Current Power State: Power State #0 00:31:15.182 Power State #0: 00:31:15.182 Max Power: 25.00 W 00:31:15.182 Non-Operational State: Operational 00:31:15.182 Entry Latency: 16 microseconds 00:31:15.182 Exit Latency: 4 microseconds 00:31:15.182 Relative Read Throughput: 0 00:31:15.182 Relative Read Latency: 0 00:31:15.182 Relative Write Throughput: 0 00:31:15.182 Relative Write Latency: 0 00:31:15.182 Idle Power: Not Reported 00:31:15.182 Active Power: Not Reported 00:31:15.182 Non-Operational Permissive Mode: Not Supported 00:31:15.182 00:31:15.182 Health Information 00:31:15.182 ================== 00:31:15.182 Critical Warnings: 00:31:15.182 Available Spare Space: OK 00:31:15.182 Temperature: [2024-12-06 13:26:08.231343] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64901 terminated unexpected 00:31:15.182 OK 00:31:15.182 Device Reliability: OK 00:31:15.182 Read Only: No 00:31:15.182 Volatile Memory Backup: OK 00:31:15.182 Current Temperature: 323 Kelvin (50 Celsius) 00:31:15.182 Temperature Threshold: 343 Kelvin (70 Celsius) 00:31:15.182 Available Spare: 0% 00:31:15.182 Available Spare Threshold: 0% 00:31:15.182 Life Percentage Used: 0% 00:31:15.182 Data Units Read: 939 00:31:15.182 Data Units Written: 806 00:31:15.182 Host Read Commands: 44421 00:31:15.182 Host Write Commands: 43205 00:31:15.182 Controller Busy Time: 0 minutes 00:31:15.182 Power Cycles: 0 00:31:15.182 Power On Hours: 0 hours 00:31:15.182 Unsafe Shutdowns: 0 00:31:15.182 Unrecoverable Media Errors: 0 00:31:15.182 Lifetime Error Log Entries: 0 00:31:15.182 Warning Temperature Time: 0 minutes 00:31:15.182 Critical Temperature Time: 0 minutes 00:31:15.182 00:31:15.182 Number of Queues 00:31:15.182 ================ 00:31:15.182 Number of I/O Submission Queues: 64 00:31:15.182 Number of I/O Completion Queues: 64 00:31:15.182 00:31:15.182 ZNS Specific Controller Data 00:31:15.182 ============================ 00:31:15.182 Zone Append Size Limit: 0 00:31:15.182 00:31:15.182 00:31:15.182 Active Namespaces 00:31:15.182 ================= 00:31:15.182 Namespace ID:1 00:31:15.182 Error Recovery Timeout: Unlimited 00:31:15.182 Command Set Identifier: NVM (00h) 00:31:15.182 Deallocate: Supported 00:31:15.182 Deallocated/Unwritten Error: Supported 00:31:15.182 Deallocated Read Value: All 0x00 00:31:15.182 Deallocate in Write Zeroes: Not Supported 00:31:15.182 Deallocated Guard Field: 0xFFFF 00:31:15.182 Flush: Supported 00:31:15.182 Reservation: Not Supported 00:31:15.182 Namespace Sharing Capabilities: Private 00:31:15.182 Size (in LBAs): 1310720 (5GiB) 00:31:15.182 Capacity (in LBAs): 1310720 (5GiB) 00:31:15.182 Utilization (in LBAs): 1310720 (5GiB) 00:31:15.182 Thin Provisioning: Not Supported 00:31:15.182 Per-NS 
Atomic Units: No 00:31:15.182 Maximum Single Source Range Length: 128 00:31:15.182 Maximum Copy Length: 128 00:31:15.182 Maximum Source Range Count: 128 00:31:15.182 NGUID/EUI64 Never Reused: No 00:31:15.182 Namespace Write Protected: No 00:31:15.182 Number of LBA Formats: 8 00:31:15.182 Current LBA Format: LBA Format #04 00:31:15.182 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:15.182 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:15.182 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:15.182 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:15.182 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:15.182 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:15.182 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:15.182 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:15.182 00:31:15.182 NVM Specific Namespace Data 00:31:15.182 =========================== 00:31:15.182 Logical Block Storage Tag Mask: 0 00:31:15.182 Protection Information Capabilities: 00:31:15.182 16b Guard Protection Information Storage Tag Support: No 00:31:15.182 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:31:15.182 Storage Tag Check Read Support: No 00:31:15.182 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.182 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.182 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.182 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.182 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.182 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.182 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.182 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.182 ===================================================== 00:31:15.182 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:15.182 ===================================================== 00:31:15.182 Controller Capabilities/Features 00:31:15.182 ================================ 00:31:15.182 Vendor ID: 1b36 00:31:15.182 Subsystem Vendor ID: 1af4 00:31:15.182 Serial Number: 12343 00:31:15.182 Model Number: QEMU NVMe Ctrl 00:31:15.182 Firmware Version: 8.0.0 00:31:15.182 Recommended Arb Burst: 6 00:31:15.182 IEEE OUI Identifier: 00 54 52 00:31:15.182 Multi-path I/O 00:31:15.182 May have multiple subsystem ports: No 00:31:15.182 May have multiple controllers: Yes 00:31:15.182 Associated with SR-IOV VF: No 00:31:15.182 Max Data Transfer Size: 524288 00:31:15.182 Max Number of Namespaces: 256 00:31:15.182 Max Number of I/O Queues: 64 00:31:15.182 NVMe Specification Version (VS): 1.4 00:31:15.182 NVMe Specification Version (Identify): 1.4 00:31:15.182 Maximum Queue Entries: 2048 00:31:15.182 Contiguous Queues Required: Yes 00:31:15.182 Arbitration Mechanisms Supported 00:31:15.182 Weighted Round Robin: Not Supported 00:31:15.182 Vendor Specific: Not Supported 00:31:15.182 Reset Timeout: 7500 ms 00:31:15.182 Doorbell Stride: 4 bytes 00:31:15.182 NVM Subsystem Reset: Not Supported 00:31:15.182 Command Sets Supported 00:31:15.182 NVM Command Set: Supported 00:31:15.182 Boot Partition: Not Supported 00:31:15.182 Memory Page Size Minimum: 4096 bytes 00:31:15.182 Memory Page Size 
Maximum: 65536 bytes 00:31:15.182 Persistent Memory Region: Not Supported 00:31:15.182 Optional Asynchronous Events Supported 00:31:15.182 Namespace Attribute Notices: Supported 00:31:15.182 Firmware Activation Notices: Not Supported 00:31:15.182 ANA Change Notices: Not Supported 00:31:15.182 PLE Aggregate Log Change Notices: Not Supported 00:31:15.182 LBA Status Info Alert Notices: Not Supported 00:31:15.182 EGE Aggregate Log Change Notices: Not Supported 00:31:15.182 Normal NVM Subsystem Shutdown event: Not Supported 00:31:15.182 Zone Descriptor Change Notices: Not Supported 00:31:15.182 Discovery Log Change Notices: Not Supported 00:31:15.182 Controller Attributes 00:31:15.182 128-bit Host Identifier: Not Supported 00:31:15.182 Non-Operational Permissive Mode: Not Supported 00:31:15.182 NVM Sets: Not Supported 00:31:15.182 Read Recovery Levels: Not Supported 00:31:15.183 Endurance Groups: Supported 00:31:15.183 Predictable Latency Mode: Not Supported 00:31:15.183 Traffic Based Keep ALive: Not Supported 00:31:15.183 Namespace Granularity: Not Supported 00:31:15.183 SQ Associations: Not Supported 00:31:15.183 UUID List: Not Supported 00:31:15.183 Multi-Domain Subsystem: Not Supported 00:31:15.183 Fixed Capacity Management: Not Supported 00:31:15.183 Variable Capacity Management: Not Supported 00:31:15.183 Delete Endurance Group: Not Supported 00:31:15.183 Delete NVM Set: Not Supported 00:31:15.183 Extended LBA Formats Supported: Supported 00:31:15.183 Flexible Data Placement Supported: Supported 00:31:15.183 00:31:15.183 Controller Memory Buffer Support 00:31:15.183 ================================ 00:31:15.183 Supported: No 00:31:15.183 00:31:15.183 Persistent Memory Region Support 00:31:15.183 ================================ 00:31:15.183 Supported: No 00:31:15.183 00:31:15.183 Admin Command Set Attributes 00:31:15.183 ============================ 00:31:15.183 Security Send/Receive: Not Supported 00:31:15.183 Format NVM: Supported 00:31:15.183 Firmware Activate/Download: Not Supported 00:31:15.183 Namespace Management: Supported 00:31:15.183 Device Self-Test: Not Supported 00:31:15.183 Directives: Supported 00:31:15.183 NVMe-MI: Not Supported 00:31:15.183 Virtualization Management: Not Supported 00:31:15.183 Doorbell Buffer Config: Supported 00:31:15.183 Get LBA Status Capability: Not Supported 00:31:15.183 Command & Feature Lockdown Capability: Not Supported 00:31:15.183 Abort Command Limit: 4 00:31:15.183 Async Event Request Limit: 4 00:31:15.183 Number of Firmware Slots: N/A 00:31:15.183 Firmware Slot 1 Read-Only: N/A 00:31:15.183 Firmware Activation Without Reset: N/A 00:31:15.183 Multiple Update Detection Support: N/A 00:31:15.183 Firmware Update Granularity: No Information Provided 00:31:15.183 Per-Namespace SMART Log: Yes 00:31:15.183 Asymmetric Namespace Access Log Page: Not Supported 00:31:15.183 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:31:15.183 Command Effects Log Page: Supported 00:31:15.183 Get Log Page Extended Data: Supported 00:31:15.183 Telemetry Log Pages: Not Supported 00:31:15.183 Persistent Event Log Pages: Not Supported 00:31:15.183 Supported Log Pages Log Page: May Support 00:31:15.183 Commands Supported & Effects Log Page: Not Supported 00:31:15.183 Feature Identifiers & Effects Log Page:May Support 00:31:15.183 NVMe-MI Commands & Effects Log Page: May Support 00:31:15.183 Data Area 4 for Telemetry Log: Not Supported 00:31:15.183 Error Log Page Entries Supported: 1 00:31:15.183 Keep Alive: Not Supported 00:31:15.183 00:31:15.183 NVM Command Set 
Attributes 00:31:15.183 ========================== 00:31:15.183 Submission Queue Entry Size 00:31:15.183 Max: 64 00:31:15.183 Min: 64 00:31:15.183 Completion Queue Entry Size 00:31:15.183 Max: 16 00:31:15.183 Min: 16 00:31:15.183 Number of Namespaces: 256 00:31:15.183 Compare Command: Supported 00:31:15.183 Write Uncorrectable Command: Not Supported 00:31:15.183 Dataset Management Command: Supported 00:31:15.183 Write Zeroes Command: Supported 00:31:15.183 Set Features Save Field: Supported 00:31:15.183 Reservations: Not Supported 00:31:15.183 Timestamp: Supported 00:31:15.183 Copy: Supported 00:31:15.183 Volatile Write Cache: Present 00:31:15.183 Atomic Write Unit (Normal): 1 00:31:15.183 Atomic Write Unit (PFail): 1 00:31:15.183 Atomic Compare & Write Unit: 1 00:31:15.183 Fused Compare & Write: Not Supported 00:31:15.183 Scatter-Gather List 00:31:15.183 SGL Command Set: Supported 00:31:15.183 SGL Keyed: Not Supported 00:31:15.183 SGL Bit Bucket Descriptor: Not Supported 00:31:15.183 SGL Metadata Pointer: Not Supported 00:31:15.183 Oversized SGL: Not Supported 00:31:15.183 SGL Metadata Address: Not Supported 00:31:15.183 SGL Offset: Not Supported 00:31:15.183 Transport SGL Data Block: Not Supported 00:31:15.183 Replay Protected Memory Block: Not Supported 00:31:15.183 00:31:15.183 Firmware Slot Information 00:31:15.183 ========================= 00:31:15.183 Active slot: 1 00:31:15.183 Slot 1 Firmware Revision: 1.0 00:31:15.183 00:31:15.183 00:31:15.183 Commands Supported and Effects 00:31:15.183 ============================== 00:31:15.183 Admin Commands 00:31:15.183 -------------- 00:31:15.183 Delete I/O Submission Queue (00h): Supported 00:31:15.183 Create I/O Submission Queue (01h): Supported 00:31:15.183 Get Log Page (02h): Supported 00:31:15.183 Delete I/O Completion Queue (04h): Supported 00:31:15.183 Create I/O Completion Queue (05h): Supported 00:31:15.183 Identify (06h): Supported 00:31:15.183 Abort (08h): Supported 00:31:15.183 Set Features (09h): Supported 00:31:15.183 Get Features (0Ah): Supported 00:31:15.183 Asynchronous Event Request (0Ch): Supported 00:31:15.183 Namespace Attachment (15h): Supported NS-Inventory-Change 00:31:15.183 Directive Send (19h): Supported 00:31:15.183 Directive Receive (1Ah): Supported 00:31:15.183 Virtualization Management (1Ch): Supported 00:31:15.183 Doorbell Buffer Config (7Ch): Supported 00:31:15.183 Format NVM (80h): Supported LBA-Change 00:31:15.183 I/O Commands 00:31:15.183 ------------ 00:31:15.183 Flush (00h): Supported LBA-Change 00:31:15.183 Write (01h): Supported LBA-Change 00:31:15.183 Read (02h): Supported 00:31:15.183 Compare (05h): Supported 00:31:15.183 Write Zeroes (08h): Supported LBA-Change 00:31:15.183 Dataset Management (09h): Supported LBA-Change 00:31:15.183 Unknown (0Ch): Supported 00:31:15.183 Unknown (12h): Supported 00:31:15.183 Copy (19h): Supported LBA-Change 00:31:15.183 Unknown (1Dh): Supported LBA-Change 00:31:15.183 00:31:15.183 Error Log 00:31:15.183 ========= 00:31:15.183 00:31:15.183 Arbitration 00:31:15.183 =========== 00:31:15.183 Arbitration Burst: no limit 00:31:15.183 00:31:15.183 Power Management 00:31:15.183 ================ 00:31:15.183 Number of Power States: 1 00:31:15.183 Current Power State: Power State #0 00:31:15.183 Power State #0: 00:31:15.183 Max Power: 25.00 W 00:31:15.183 Non-Operational State: Operational 00:31:15.183 Entry Latency: 16 microseconds 00:31:15.183 Exit Latency: 4 microseconds 00:31:15.183 Relative Read Throughput: 0 00:31:15.183 Relative Read Latency: 0 00:31:15.183 Relative 
Write Throughput: 0 00:31:15.183 Relative Write Latency: 0 00:31:15.183 Idle Power: Not Reported 00:31:15.183 Active Power: Not Reported 00:31:15.183 Non-Operational Permissive Mode: Not Supported 00:31:15.183 00:31:15.183 Health Information 00:31:15.183 ================== 00:31:15.183 Critical Warnings: 00:31:15.183 Available Spare Space: OK 00:31:15.183 Temperature: OK 00:31:15.183 Device Reliability: OK 00:31:15.183 Read Only: No 00:31:15.183 Volatile Memory Backup: OK 00:31:15.183 Current Temperature: 323 Kelvin (50 Celsius) 00:31:15.183 Temperature Threshold: 343 Kelvin (70 Celsius) 00:31:15.183 Available Spare: 0% 00:31:15.183 Available Spare Threshold: 0% 00:31:15.183 Life Percentage Used: 0% 00:31:15.183 Data Units Read: 716 00:31:15.183 Data Units Written: 645 00:31:15.183 Host Read Commands: 31113 00:31:15.183 Host Write Commands: 30536 00:31:15.183 Controller Busy Time: 0 minutes 00:31:15.183 Power Cycles: 0 00:31:15.183 Power On Hours: 0 hours 00:31:15.183 Unsafe Shutdowns: 0 00:31:15.183 Unrecoverable Media Errors: 0 00:31:15.183 Lifetime Error Log Entries: 0 00:31:15.183 Warning Temperature Time: 0 minutes 00:31:15.183 Critical Temperature Time: 0 minutes 00:31:15.183 00:31:15.183 Number of Queues 00:31:15.183 ================ 00:31:15.183 Number of I/O Submission Queues: 64 00:31:15.183 Number of I/O Completion Queues: 64 00:31:15.183 00:31:15.183 ZNS Specific Controller Data 00:31:15.183 ============================ 00:31:15.183 Zone Append Size Limit: 0 00:31:15.183 00:31:15.183 00:31:15.183 Active Namespaces 00:31:15.183 ================= 00:31:15.183 Namespace ID:1 00:31:15.183 Error Recovery Timeout: Unlimited 00:31:15.183 Command Set Identifier: NVM (00h) 00:31:15.183 Deallocate: Supported 00:31:15.183 Deallocated/Unwritten Error: Supported 00:31:15.183 Deallocated Read Value: All 0x00 00:31:15.183 Deallocate in Write Zeroes: Not Supported 00:31:15.183 Deallocated Guard Field: 0xFFFF 00:31:15.183 Flush: Supported 00:31:15.183 Reservation: Not Supported 00:31:15.183 Namespace Sharing Capabilities: Multiple Controllers 00:31:15.183 Size (in LBAs): 262144 (1GiB) 00:31:15.183 Capacity (in LBAs): 262144 (1GiB) 00:31:15.183 Utilization (in LBAs): 262144 (1GiB) 00:31:15.183 Thin Provisioning: Not Supported 00:31:15.183 Per-NS Atomic Units: No 00:31:15.183 Maximum Single Source Range Length: 128 00:31:15.184 Maximum Copy Length: 128 00:31:15.184 Maximum Source Range Count: 128 00:31:15.184 NGUID/EUI64 Never Reused: No 00:31:15.184 Namespace Write Protected: No 00:31:15.184 Endurance group ID: 1 00:31:15.184 Number of LBA Formats: 8 00:31:15.184 Current LBA Format: LBA Format #04 00:31:15.184 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:15.184 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:15.184 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:15.184 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:15.184 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:15.184 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:15.184 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:15.184 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:15.184 00:31:15.184 Get Feature FDP: 00:31:15.184 ================ 00:31:15.184 Enabled: Yes 00:31:15.184 FDP configuration index: 0 00:31:15.184 00:31:15.184 FDP configurations log page 00:31:15.184 =========================== 00:31:15.184 Number of FDP configurations: 1 00:31:15.184 Version: 0 00:31:15.184 Size: 112 00:31:15.184 FDP Configuration Descriptor: 0 00:31:15.184 Descriptor Size: 96 00:31:15.184 
Reclaim Group Identifier format: 2 00:31:15.184 FDP Volatile Write Cache: Not Present 00:31:15.184 FDP Configuration: Valid 00:31:15.184 Vendor Specific Size: 0 00:31:15.184 Number of Reclaim Groups: 2 00:31:15.184 Number of Reclaim Unit Handles: 8 00:31:15.184 Max Placement Identifiers: 128 00:31:15.184 Number of Namespaces Supported: 256 00:31:15.184 Reclaim unit Nominal Size: 6000000 bytes 00:31:15.184 Estimated Reclaim Unit Time Limit: Not Reported 00:31:15.184 RUH Desc #000: RUH Type: Initially Isolated 00:31:15.184 RUH Desc #001: RUH Type: Initially Isolated 00:31:15.184 RUH Desc #002: RUH Type: Initially Isolated 00:31:15.184 RUH Desc #003: RUH Type: Initially Isolated 00:31:15.184 RUH Desc #004: RUH Type: Initially Isolated 00:31:15.184 RUH Desc #005: RUH Type: Initially Isolated 00:31:15.184 RUH Desc #006: RUH Type: Initially Isolated 00:31:15.184 RUH Desc #007: RUH Type: Initially Isolated 00:31:15.184 00:31:15.184 FDP reclaim unit handle usage log page 00:31:15.184 ====================================== 00:31:15.184 Number of Reclaim Unit Handles: 8 00:31:15.184 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:31:15.184 RUH Usage Desc #001: RUH Attributes: Unused 00:31:15.184 RUH Usage Desc #002: RUH Attributes: Unused 00:31:15.184 RUH Usage Desc #003: RUH Attributes: Unused 00:31:15.184 RUH Usage Desc #004: RUH Attributes: Unused 00:31:15.184 RUH Usage Desc #005: RUH Attributes: Unused 00:31:15.184 RUH Usage Desc #006: RUH Attributes: Unused 00:31:15.184 RUH Usage Desc #007: RUH Attributes: Unused 00:31:15.184 00:31:15.184 FDP statistics log page 00:31:15.184 ======================= 00:31:15.184 Host bytes with metadata written: 410267648 00:31:15.184 Media bytes with metadata written: 410320896 00:31:15.184 Media bytes erased: 0 00:31:15.184 00:31:15.184 FDP events log page 00:31:15.184 =================== 00:31:15.184 Number of FDP events: 0 00:31:15.184 00:31:15.184 NVM Specific Namespace Data 00:31:15.184 =========================== 00:31:15.184 Logical Block Storage Tag Mask: 0 00:31:15.184 Protection Information Capabilities: 00:31:15.184 16b Guard Protection Information Storage Tag Support: No 00:31:15.184 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:31:15.184 Storage Tag Check Read Support: No 00:31:15.184 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.184 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.184 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.184 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.184 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.184 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.184 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.184 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.184 ===================================================== 00:31:15.184 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:15.184 ===================================================== 00:31:15.184 Controller Capabilities/Features 00:31:15.184 ================================ 00:31:15.184 Vendor ID: 1b36 00:31:15.184 Subsystem Vendor ID: 1af4 00:31:15.184 Serial Number: 12342 00:31:15.184 Model Number: QEMU NVMe
Ctrl 00:31:15.184 Firmware Version: 8.0.0 00:31:15.184 Recommended Arb Burst: 6 00:31:15.184 IEEE OUI Identifier: 00 54 52 00:31:15.184 Multi-path I/O 00:31:15.184 May have multiple subsystem ports: No 00:31:15.184 May have multiple controllers: No 00:31:15.184 Associated with SR-IOV VF: No 00:31:15.184 Max Data Transfer Size: 524288 00:31:15.184 Max Number of Namespaces: 256 00:31:15.184 Max Number of I/O Queues: 64 00:31:15.184 NVMe Specification Version (VS): 1.4 00:31:15.184 NVMe Specification Version (Identify): 1.4 00:31:15.184 Maximum Queue Entries: 2048 00:31:15.184 Contiguous Queues Required: Yes 00:31:15.184 Arbitration Mechanisms Supported 00:31:15.184 Weighted Round Robin: Not Supported 00:31:15.184 Vendor Specific: Not Supported 00:31:15.184 Reset Timeout: 7500 ms 00:31:15.184 Doorbell Stride: 4 bytes 00:31:15.184 NVM Subsystem Reset: Not Supported 00:31:15.184 Command Sets Supported 00:31:15.184 NVM Command Set: Supported 00:31:15.184 Boot Partition: Not Supported 00:31:15.184 Memory Page Size Minimum: 4096 bytes 00:31:15.184 Memory Page Size Maximum: 65536 bytes 00:31:15.184 Persistent Memory Region: Not Supported 00:31:15.184 Optional Asynchronous Events Supported 00:31:15.184 Namespace Attribute Notices: Supported 00:31:15.184 Firmware Activation Notices: Not Supported 00:31:15.184 ANA Change Notices: Not Supported 00:31:15.184 PLE Aggregate Log Change Notices: Not Supported 00:31:15.184 LBA Status Info Alert Notices: Not Supported 00:31:15.184 EGE Aggregate Log Change Notices: Not Supported 00:31:15.184 Normal NVM Subsystem Shutdown event: Not Supported 00:31:15.184 Zone Descriptor Change Notices: Not Supported 00:31:15.184 Discovery Log Change Notices: Not Supported 00:31:15.184 Controller Attributes 00:31:15.184 128-bit Host Identifier: Not Supported 00:31:15.184 Non-Operational Permissive Mode: Not Supported 00:31:15.184 NVM Sets: Not Supported 00:31:15.184 Read Recovery Levels: Not Supported 00:31:15.184 Endurance Groups: Not Supported 00:31:15.184 Predictable Latency Mode: Not Supported 00:31:15.184 Traffic Based Keep ALive: Not Supported 00:31:15.184 Namespace Granularity: Not Supported 00:31:15.184 SQ Associations: Not Supported 00:31:15.184 UUID List: Not Supported 00:31:15.184 Multi-Domain Subsystem: Not Supported 00:31:15.184 Fixed Capacity Management: Not Supported 00:31:15.184 Variable Capacity Management: Not Supported 00:31:15.184 Delete Endurance Group: Not Supported 00:31:15.184 Delete NVM Set: Not Supported 00:31:15.184 Extended LBA Formats Supported: Supported 00:31:15.184 Flexible Data Placement Supported: Not Supported 00:31:15.184 00:31:15.185 Controller Memory Buffer Support 00:31:15.185 ================================ 00:31:15.185 Supported: No 00:31:15.185 00:31:15.185 Persistent Memory Region Support 00:31:15.185 ================================ 00:31:15.185 Supported: No 00:31:15.185 00:31:15.185 Admin Command Set Attributes 00:31:15.185 ============================ 00:31:15.185 Security Send/Receive: Not Supported 00:31:15.185 Format NVM: Supported 00:31:15.185 Firmware Activate/Download: Not Supported 00:31:15.185 Namespace Management: Supported 00:31:15.185 Device Self-Test: Not Supported 00:31:15.185 Directives: Supported 00:31:15.185 NVMe-MI: Not Supported 00:31:15.185 Virtualization Management: Not Supported 00:31:15.185 Doorbell Buffer Config: Supported 00:31:15.185 Get LBA Status Capability: Not Supported 00:31:15.185 Command & Feature Lockdown Capability: Not Supported 00:31:15.185 Abort Command Limit: 4 00:31:15.185 Async Event Request 
Limit: 4 00:31:15.185 Number of Firmware Slots: N/A 00:31:15.185 Firmware Slot 1 Read-Only: N/A 00:31:15.185 Firmware Activation Without Reset: N/A 00:31:15.185 Multiple Update Detection Support: N/A 00:31:15.185 Firmware Update Granularity: No Information Provided 00:31:15.185 Per-Namespace SMART Log: Yes 00:31:15.185 Asymmetric Namespace Access Log Page: Not Supported 00:31:15.185 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:31:15.185 Command Effects Log Page: Supported 00:31:15.185 Get Log Page Extended Data: Supported 00:31:15.185 Telemetry Log Pages: Not Supported 00:31:15.185 Persistent Event Log Pages: Not Supported 00:31:15.185 Supported Log Pages Log Page: May Support 00:31:15.185 Commands Supported & Effects Log Page: Not Supported 00:31:15.185 Feature Identifiers & Effects Log Page:May Support 00:31:15.185 NVMe-MI Commands & Effects Log Page: May Support 00:31:15.185 Data Area 4 for Telemetry Log: Not Supported 00:31:15.185 Error Log Page Entries Supported: 1 00:31:15.185 Keep Alive: Not Supported 00:31:15.185 00:31:15.185 NVM Command Set Attributes 00:31:15.185 ========================== 00:31:15.185 Submission Queue Entry Size 00:31:15.185 Max: 64 00:31:15.185 Min: 64 00:31:15.185 Completion Queue Entry Size 00:31:15.185 Max: 16 00:31:15.185 Min: 16 00:31:15.185 Number of Namespaces: 256 00:31:15.185 Compare Command: Supported 00:31:15.185 Write Uncorrectable Command: Not Supported 00:31:15.185 Dataset Management Command: Supported 00:31:15.185 Write Zeroes Command: Supported 00:31:15.185 Set Features Save Field: Supported 00:31:15.185 Reservations: Not Supported 00:31:15.185 Timestamp: Supported 00:31:15.185 Copy: Supported 00:31:15.185 Volatile Write Cache: Present 00:31:15.185 Atomic Write Unit (Normal): 1 00:31:15.185 Atomic Write Unit (PFail): 1 00:31:15.185 Atomic Compare & Write Unit: 1 00:31:15.185 Fused Compare & Write: Not Supported 00:31:15.185 Scatter-Gather List 00:31:15.185 SGL Command Set: Supported 00:31:15.185 SGL Keyed: Not Supported 00:31:15.185 SGL Bit Bucket Descriptor: Not Supported 00:31:15.185 SGL Metadata Pointer: Not Supported 00:31:15.185 Oversized SGL: Not Supported 00:31:15.185 SGL Metadata Address: Not Supported 00:31:15.185 SGL Offset: Not Supported 00:31:15.185 Transport SGL Data Block: Not Supported 00:31:15.185 Replay Protected Memory Block: Not Supported 00:31:15.185 00:31:15.185 Firmware Slot Information 00:31:15.185 ========================= 00:31:15.185 Active slot: 1 00:31:15.185 Slot 1 Firmware Revision: 1.0 00:31:15.185 00:31:15.185 00:31:15.185 Commands Supported and Effects 00:31:15.185 ============================== 00:31:15.185 Admin Commands 00:31:15.185 -------------- 00:31:15.185 Delete I/O Submission Queue (00h): Supported 00:31:15.185 Create I/O Submission Queue (01h): Supported 00:31:15.185 Get Log Page (02h): Supported 00:31:15.185 Delete I/O Completion Queue (04h): Supported 00:31:15.185 Create I/O Completion Queue (05h): Supported 00:31:15.185 Identify (06h): Supported 00:31:15.185 Abort (08h): Supported 00:31:15.185 Set Features (09h): Supported 00:31:15.185 Get Features (0Ah): Supported 00:31:15.185 Asynchronous Event Request (0Ch): Supported 00:31:15.185 Namespace Attachment (15h): Supported NS-Inventory-Change 00:31:15.185 Directive Send (19h): Supported 00:31:15.185 Directive Receive (1Ah): Supported 00:31:15.185 Virtualization Management (1Ch): Supported 00:31:15.185 Doorbell Buffer Config (7Ch): Supported 00:31:15.185 Format NVM (80h): Supported LBA-Change 00:31:15.185 I/O Commands 00:31:15.185 ------------ 
00:31:15.185 Flush (00h): Supported LBA-Change 00:31:15.185 Write (01h): Supported LBA-Change 00:31:15.185 Read (02h): Supported 00:31:15.185 Compare (05h): Supported 00:31:15.185 Write Zeroes (08h): Supported LBA-Change 00:31:15.185 Dataset Management (09h): Supported LBA-Change 00:31:15.185 Unknown (0Ch): Supported 00:31:15.185 Unknown (12h): Supported 00:31:15.185 Copy (19h): Supported LBA-Change 00:31:15.185 Unknown (1Dh): Supported LBA-Change 00:31:15.185 00:31:15.185 Error Log 00:31:15.185 ========= 00:31:15.185 00:31:15.185 Arbitration 00:31:15.185 =========== 00:31:15.185 Arbitration Burst: no limit 00:31:15.185 00:31:15.185 Power Management 00:31:15.185 ================ 00:31:15.185 Number of Power States: 1 00:31:15.185 Current Power State: Power State #0 00:31:15.185 Power State #0: 00:31:15.185 Max Power: 25.00 W 00:31:15.185 Non-Operational State: Operational 00:31:15.185 Entry Latency: 16 microseconds 00:31:15.185 Exit Latency: 4 microseconds 00:31:15.185 Relative Read Throughput: 0 00:31:15.185 Relative Read Latency: 0 00:31:15.185 Relative Write Throughput: 0 00:31:15.185 Relative Write Latency: 0 00:31:15.185 Idle Power: Not Reported 00:31:15.185 Active Power: Not Reported 00:31:15.185 Non-Operational Permissive Mode: Not Supported 00:31:15.185 00:31:15.185 Health Information 00:31:15.185 ================== 00:31:15.185 Critical Warnings: 00:31:15.185 Available Spare Space: OK 00:31:15.185 Temperature: OK 00:31:15.185 Device Reliability: OK 00:31:15.185 Read Only: No 00:31:15.185 Volatile Memory Backup: OK 00:31:15.185 Current Temperature: 323 Kelvin (50 Celsius) 00:31:15.185 Temperature Threshold: 343 Kelvin (70 Celsius) 00:31:15.185 Available Spare: 0% 00:31:15.185 Available Spare Threshold: 0% 00:31:15.185 Life Percentage Used: 0% 00:31:15.185 Data Units Read: 1995 00:31:15.185 Data Units Written: 1782 00:31:15.185 Host Read Commands: 92026 00:31:15.185 Host Write Commands: 90295 00:31:15.185 Controller Busy Time: 0 minutes 00:31:15.185 Power Cycles: 0 00:31:15.185 Power On Hours: 0 hours 00:31:15.185 Unsafe Shutdowns: 0 00:31:15.185 Unrecoverable Media Errors: 0 00:31:15.185 Lifetime Error Log Entries: 0 00:31:15.185 Warning Temperature Time: 0 minutes 00:31:15.185 Critical Temperature Time: 0 minutes 00:31:15.185 00:31:15.185 Number of Queues 00:31:15.185 ================ 00:31:15.185 Number of I/O Submission Queues: 64 00:31:15.185 Number of I/O Completion Queues: 64 00:31:15.185 00:31:15.185 ZNS Specific Controller Data 00:31:15.185 ============================ 00:31:15.185 Zone Append Size Limit: 0 00:31:15.185 00:31:15.185 00:31:15.185 Active Namespaces 00:31:15.185 ================= 00:31:15.185 Namespace ID:1 00:31:15.185 Error Recovery Timeout: Unlimited 00:31:15.185 Command Set Identifier: NVM (00h) 00:31:15.185 Deallocate: Supported 00:31:15.185 Deallocated/Unwritten Error: Supported 00:31:15.185 Deallocated Read Value: All 0x00 00:31:15.185 Deallocate in Write Zeroes: Not Supported 00:31:15.185 Deallocated Guard Field: 0xFFFF 00:31:15.185 Flush: Supported 00:31:15.185 Reservation: Not Supported 00:31:15.185 Namespace Sharing Capabilities: Private 00:31:15.185 Size (in LBAs): 1048576 (4GiB) 00:31:15.185 Capacity (in LBAs): 1048576 (4GiB) 00:31:15.185 Utilization (in LBAs): 1048576 (4GiB) 00:31:15.185 Thin Provisioning: Not Supported 00:31:15.185 Per-NS Atomic Units: No 00:31:15.185 Maximum Single Source Range Length: 128 00:31:15.185 Maximum Copy Length: 128 00:31:15.185 Maximum Source Range Count: 128 00:31:15.185 NGUID/EUI64 Never Reused: No 00:31:15.185 
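As a quick cross-check of the Size/Capacity/Utilization figures just reported for Namespace ID:1 (a shell sketch, not part of the captured output; the LBA format table that follows shows the current format #04 using a 4096-byte data size):

  # 1048576 LBAs at 4096 bytes per block is exactly the 4GiB the tool prints:
  echo $((1048576 * 4096))              # 4294967296 bytes
  echo $((1048576 * 4096 / 1073741824)) # 4 (GiB)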
Namespace Write Protected: No 00:31:15.185 Number of LBA Formats: 8 00:31:15.185 Current LBA Format: LBA Format #04 00:31:15.185 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:15.185 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:15.185 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:15.186 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:15.186 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:15.186 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:15.186 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:15.186 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:15.186 00:31:15.186 NVM Specific Namespace Data 00:31:15.186 =========================== 00:31:15.186 Logical Block Storage Tag Mask: 0 00:31:15.186 Protection Information Capabilities: 00:31:15.186 16b Guard Protection Information Storage Tag Support: No 00:31:15.186 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:31:15.186 Storage Tag Check Read Support: No 00:31:15.186 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Namespace ID:2 00:31:15.186 Error Recovery Timeout: Unlimited 00:31:15.186 Command Set Identifier: NVM (00h) 00:31:15.186 Deallocate: Supported 00:31:15.186 Deallocated/Unwritten Error: Supported 00:31:15.186 Deallocated Read Value: All 0x00 00:31:15.186 Deallocate in Write Zeroes: Not Supported 00:31:15.186 Deallocated Guard Field: 0xFFFF 00:31:15.186 Flush: Supported 00:31:15.186 Reservation: Not Supported 00:31:15.186 Namespace Sharing Capabilities: Private 00:31:15.186 Size (in LBAs): 1048576 (4GiB) 00:31:15.186 Capacity (in LBAs): 1048576 (4GiB) 00:31:15.186 Utilization (in LBAs): 1048576 (4GiB) 00:31:15.186 Thin Provisioning: Not Supported 00:31:15.186 Per-NS Atomic Units: No 00:31:15.186 Maximum Single Source Range Length: 128 00:31:15.186 Maximum Copy Length: 128 00:31:15.186 Maximum Source Range Count: 128 00:31:15.186 NGUID/EUI64 Never Reused: No 00:31:15.186 Namespace Write Protected: No 00:31:15.186 Number of LBA Formats: 8 00:31:15.186 Current LBA Format: LBA Format #04 00:31:15.186 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:15.186 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:15.186 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:15.186 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:15.186 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:15.186 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:15.186 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:15.186 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:15.186 00:31:15.186 NVM Specific Namespace Data 00:31:15.186 =========================== 00:31:15.186 Logical Block Storage Tag Mask: 0 00:31:15.186 Protection Information Capabilities: 
00:31:15.186 16b Guard Protection Information Storage Tag Support: No 00:31:15.186 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:31:15.186 Storage Tag Check Read Support: No 00:31:15.186 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.186 Namespace ID:3 00:31:15.186 Error Recovery Timeout: Unlimited 00:31:15.186 Command Set Identifier: NVM (00h) 00:31:15.186 Deallocate: Supported 00:31:15.186 Deallocated/Unwritten Error: Supported 00:31:15.186 Deallocated Read Value: All 0x00 00:31:15.186 Deallocate in Write Zeroes: Not Supported 00:31:15.186 Deallocated Guard Field: 0xFFFF 00:31:15.186 Flush: Supported 00:31:15.186 Reservation: Not Supported 00:31:15.186 Namespace Sharing Capabilities: Private 00:31:15.186 Size (in LBAs): 1048576 (4GiB) 00:31:15.186 Capacity (in LBAs): [2024-12-06 13:26:08.233455] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64901 terminated unexpected 00:31:15.443 1048576 (4GiB) 00:31:15.443 Utilization (in LBAs): 1048576 (4GiB) 00:31:15.443 Thin Provisioning: Not Supported 00:31:15.443 Per-NS Atomic Units: No 00:31:15.443 Maximum Single Source Range Length: 128 00:31:15.443 Maximum Copy Length: 128 00:31:15.443 Maximum Source Range Count: 128 00:31:15.443 NGUID/EUI64 Never Reused: No 00:31:15.443 Namespace Write Protected: No 00:31:15.443 Number of LBA Formats: 8 00:31:15.443 Current LBA Format: LBA Format #04 00:31:15.443 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:15.443 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:15.443 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:15.443 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:15.443 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:15.443 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:15.443 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:15.443 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:15.443 00:31:15.443 NVM Specific Namespace Data 00:31:15.443 =========================== 00:31:15.443 Logical Block Storage Tag Mask: 0 00:31:15.443 Protection Information Capabilities: 00:31:15.443 16b Guard Protection Information Storage Tag Support: No 00:31:15.443 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:31:15.443 Storage Tag Check Read Support: No 00:31:15.443 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.443 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.443 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.443 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.443 Extended LBA Format 
#04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.443 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.443 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.443 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.443 13:26:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:31:15.443 13:26:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:31:15.701 ===================================================== 00:31:15.701 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:15.701 ===================================================== 00:31:15.701 Controller Capabilities/Features 00:31:15.701 ================================ 00:31:15.701 Vendor ID: 1b36 00:31:15.701 Subsystem Vendor ID: 1af4 00:31:15.701 Serial Number: 12340 00:31:15.701 Model Number: QEMU NVMe Ctrl 00:31:15.701 Firmware Version: 8.0.0 00:31:15.701 Recommended Arb Burst: 6 00:31:15.701 IEEE OUI Identifier: 00 54 52 00:31:15.701 Multi-path I/O 00:31:15.701 May have multiple subsystem ports: No 00:31:15.701 May have multiple controllers: No 00:31:15.701 Associated with SR-IOV VF: No 00:31:15.701 Max Data Transfer Size: 524288 00:31:15.701 Max Number of Namespaces: 256 00:31:15.701 Max Number of I/O Queues: 64 00:31:15.701 NVMe Specification Version (VS): 1.4 00:31:15.701 NVMe Specification Version (Identify): 1.4 00:31:15.701 Maximum Queue Entries: 2048 00:31:15.701 Contiguous Queues Required: Yes 00:31:15.701 Arbitration Mechanisms Supported 00:31:15.701 Weighted Round Robin: Not Supported 00:31:15.701 Vendor Specific: Not Supported 00:31:15.701 Reset Timeout: 7500 ms 00:31:15.701 Doorbell Stride: 4 bytes 00:31:15.701 NVM Subsystem Reset: Not Supported 00:31:15.701 Command Sets Supported 00:31:15.701 NVM Command Set: Supported 00:31:15.701 Boot Partition: Not Supported 00:31:15.701 Memory Page Size Minimum: 4096 bytes 00:31:15.701 Memory Page Size Maximum: 65536 bytes 00:31:15.701 Persistent Memory Region: Not Supported 00:31:15.701 Optional Asynchronous Events Supported 00:31:15.701 Namespace Attribute Notices: Supported 00:31:15.701 Firmware Activation Notices: Not Supported 00:31:15.701 ANA Change Notices: Not Supported 00:31:15.701 PLE Aggregate Log Change Notices: Not Supported 00:31:15.701 LBA Status Info Alert Notices: Not Supported 00:31:15.701 EGE Aggregate Log Change Notices: Not Supported 00:31:15.701 Normal NVM Subsystem Shutdown event: Not Supported 00:31:15.701 Zone Descriptor Change Notices: Not Supported 00:31:15.701 Discovery Log Change Notices: Not Supported 00:31:15.701 Controller Attributes 00:31:15.701 128-bit Host Identifier: Not Supported 00:31:15.701 Non-Operational Permissive Mode: Not Supported 00:31:15.701 NVM Sets: Not Supported 00:31:15.701 Read Recovery Levels: Not Supported 00:31:15.701 Endurance Groups: Not Supported 00:31:15.701 Predictable Latency Mode: Not Supported 00:31:15.701 Traffic Based Keep ALive: Not Supported 00:31:15.701 Namespace Granularity: Not Supported 00:31:15.701 SQ Associations: Not Supported 00:31:15.701 UUID List: Not Supported 00:31:15.701 Multi-Domain Subsystem: Not Supported 00:31:15.701 Fixed Capacity Management: Not Supported 00:31:15.701 Variable Capacity Management: Not Supported 00:31:15.701 Delete Endurance Group: Not Supported 00:31:15.701 Delete NVM Set: Not Supported 
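The dump now starting for 0000:00:10.0 is produced by the spdk_nvme_identify invocation traced just above. A single controller's report can be regenerated by hand with the same binary and transport ID (a sketch, not part of the captured run: the path is this workspace's, root privileges are assumed for PCIe access, and the sed filter is only illustrative):

  SPDK_ID=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
  # Identify one controller by its PCIe transport ID, then keep only the
  # Health Information section of the report:
  sudo "$SPDK_ID" -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 \
    | sed -n '/Health Information/,/^$/p'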
00:31:15.701 Extended LBA Formats Supported: Supported 00:31:15.701 Flexible Data Placement Supported: Not Supported 00:31:15.701 00:31:15.701 Controller Memory Buffer Support 00:31:15.701 ================================ 00:31:15.701 Supported: No 00:31:15.701 00:31:15.701 Persistent Memory Region Support 00:31:15.701 ================================ 00:31:15.701 Supported: No 00:31:15.701 00:31:15.701 Admin Command Set Attributes 00:31:15.701 ============================ 00:31:15.701 Security Send/Receive: Not Supported 00:31:15.701 Format NVM: Supported 00:31:15.701 Firmware Activate/Download: Not Supported 00:31:15.701 Namespace Management: Supported 00:31:15.701 Device Self-Test: Not Supported 00:31:15.701 Directives: Supported 00:31:15.701 NVMe-MI: Not Supported 00:31:15.701 Virtualization Management: Not Supported 00:31:15.701 Doorbell Buffer Config: Supported 00:31:15.701 Get LBA Status Capability: Not Supported 00:31:15.701 Command & Feature Lockdown Capability: Not Supported 00:31:15.701 Abort Command Limit: 4 00:31:15.701 Async Event Request Limit: 4 00:31:15.701 Number of Firmware Slots: N/A 00:31:15.701 Firmware Slot 1 Read-Only: N/A 00:31:15.701 Firmware Activation Without Reset: N/A 00:31:15.701 Multiple Update Detection Support: N/A 00:31:15.701 Firmware Update Granularity: No Information Provided 00:31:15.701 Per-Namespace SMART Log: Yes 00:31:15.701 Asymmetric Namespace Access Log Page: Not Supported 00:31:15.701 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:31:15.701 Command Effects Log Page: Supported 00:31:15.701 Get Log Page Extended Data: Supported 00:31:15.701 Telemetry Log Pages: Not Supported 00:31:15.701 Persistent Event Log Pages: Not Supported 00:31:15.701 Supported Log Pages Log Page: May Support 00:31:15.701 Commands Supported & Effects Log Page: Not Supported 00:31:15.701 Feature Identifiers & Effects Log Page:May Support 00:31:15.701 NVMe-MI Commands & Effects Log Page: May Support 00:31:15.701 Data Area 4 for Telemetry Log: Not Supported 00:31:15.701 Error Log Page Entries Supported: 1 00:31:15.702 Keep Alive: Not Supported 00:31:15.702 00:31:15.702 NVM Command Set Attributes 00:31:15.702 ========================== 00:31:15.702 Submission Queue Entry Size 00:31:15.702 Max: 64 00:31:15.702 Min: 64 00:31:15.702 Completion Queue Entry Size 00:31:15.702 Max: 16 00:31:15.702 Min: 16 00:31:15.702 Number of Namespaces: 256 00:31:15.702 Compare Command: Supported 00:31:15.702 Write Uncorrectable Command: Not Supported 00:31:15.702 Dataset Management Command: Supported 00:31:15.702 Write Zeroes Command: Supported 00:31:15.702 Set Features Save Field: Supported 00:31:15.702 Reservations: Not Supported 00:31:15.702 Timestamp: Supported 00:31:15.702 Copy: Supported 00:31:15.702 Volatile Write Cache: Present 00:31:15.702 Atomic Write Unit (Normal): 1 00:31:15.702 Atomic Write Unit (PFail): 1 00:31:15.702 Atomic Compare & Write Unit: 1 00:31:15.702 Fused Compare & Write: Not Supported 00:31:15.702 Scatter-Gather List 00:31:15.702 SGL Command Set: Supported 00:31:15.702 SGL Keyed: Not Supported 00:31:15.702 SGL Bit Bucket Descriptor: Not Supported 00:31:15.702 SGL Metadata Pointer: Not Supported 00:31:15.702 Oversized SGL: Not Supported 00:31:15.702 SGL Metadata Address: Not Supported 00:31:15.702 SGL Offset: Not Supported 00:31:15.702 Transport SGL Data Block: Not Supported 00:31:15.702 Replay Protected Memory Block: Not Supported 00:31:15.702 00:31:15.702 Firmware Slot Information 00:31:15.702 ========================= 00:31:15.702 Active slot: 1 00:31:15.702 Slot 1 
Firmware Revision: 1.0 00:31:15.702 00:31:15.702 00:31:15.702 Commands Supported and Effects 00:31:15.702 ============================== 00:31:15.702 Admin Commands 00:31:15.702 -------------- 00:31:15.702 Delete I/O Submission Queue (00h): Supported 00:31:15.702 Create I/O Submission Queue (01h): Supported 00:31:15.702 Get Log Page (02h): Supported 00:31:15.702 Delete I/O Completion Queue (04h): Supported 00:31:15.702 Create I/O Completion Queue (05h): Supported 00:31:15.702 Identify (06h): Supported 00:31:15.702 Abort (08h): Supported 00:31:15.702 Set Features (09h): Supported 00:31:15.702 Get Features (0Ah): Supported 00:31:15.702 Asynchronous Event Request (0Ch): Supported 00:31:15.702 Namespace Attachment (15h): Supported NS-Inventory-Change 00:31:15.702 Directive Send (19h): Supported 00:31:15.702 Directive Receive (1Ah): Supported 00:31:15.702 Virtualization Management (1Ch): Supported 00:31:15.702 Doorbell Buffer Config (7Ch): Supported 00:31:15.702 Format NVM (80h): Supported LBA-Change 00:31:15.702 I/O Commands 00:31:15.702 ------------ 00:31:15.702 Flush (00h): Supported LBA-Change 00:31:15.702 Write (01h): Supported LBA-Change 00:31:15.702 Read (02h): Supported 00:31:15.702 Compare (05h): Supported 00:31:15.702 Write Zeroes (08h): Supported LBA-Change 00:31:15.702 Dataset Management (09h): Supported LBA-Change 00:31:15.702 Unknown (0Ch): Supported 00:31:15.702 Unknown (12h): Supported 00:31:15.702 Copy (19h): Supported LBA-Change 00:31:15.702 Unknown (1Dh): Supported LBA-Change 00:31:15.702 00:31:15.702 Error Log 00:31:15.702 ========= 00:31:15.702 00:31:15.702 Arbitration 00:31:15.702 =========== 00:31:15.702 Arbitration Burst: no limit 00:31:15.702 00:31:15.702 Power Management 00:31:15.702 ================ 00:31:15.702 Number of Power States: 1 00:31:15.702 Current Power State: Power State #0 00:31:15.702 Power State #0: 00:31:15.702 Max Power: 25.00 W 00:31:15.702 Non-Operational State: Operational 00:31:15.702 Entry Latency: 16 microseconds 00:31:15.702 Exit Latency: 4 microseconds 00:31:15.702 Relative Read Throughput: 0 00:31:15.702 Relative Read Latency: 0 00:31:15.702 Relative Write Throughput: 0 00:31:15.702 Relative Write Latency: 0 00:31:15.702 Idle Power: Not Reported 00:31:15.702 Active Power: Not Reported 00:31:15.702 Non-Operational Permissive Mode: Not Supported 00:31:15.702 00:31:15.702 Health Information 00:31:15.702 ================== 00:31:15.702 Critical Warnings: 00:31:15.702 Available Spare Space: OK 00:31:15.702 Temperature: OK 00:31:15.702 Device Reliability: OK 00:31:15.702 Read Only: No 00:31:15.702 Volatile Memory Backup: OK 00:31:15.702 Current Temperature: 323 Kelvin (50 Celsius) 00:31:15.702 Temperature Threshold: 343 Kelvin (70 Celsius) 00:31:15.702 Available Spare: 0% 00:31:15.702 Available Spare Threshold: 0% 00:31:15.702 Life Percentage Used: 0% 00:31:15.702 Data Units Read: 641 00:31:15.702 Data Units Written: 569 00:31:15.702 Host Read Commands: 30164 00:31:15.702 Host Write Commands: 29950 00:31:15.702 Controller Busy Time: 0 minutes 00:31:15.702 Power Cycles: 0 00:31:15.702 Power On Hours: 0 hours 00:31:15.702 Unsafe Shutdowns: 0 00:31:15.702 Unrecoverable Media Errors: 0 00:31:15.702 Lifetime Error Log Entries: 0 00:31:15.702 Warning Temperature Time: 0 minutes 00:31:15.702 Critical Temperature Time: 0 minutes 00:31:15.702 00:31:15.702 Number of Queues 00:31:15.702 ================ 00:31:15.702 Number of I/O Submission Queues: 64 00:31:15.702 Number of I/O Completion Queues: 64 00:31:15.702 00:31:15.702 ZNS Specific Controller Data 
00:31:15.702 ============================ 00:31:15.702 Zone Append Size Limit: 0 00:31:15.702 00:31:15.702 00:31:15.702 Active Namespaces 00:31:15.702 ================= 00:31:15.702 Namespace ID:1 00:31:15.702 Error Recovery Timeout: Unlimited 00:31:15.702 Command Set Identifier: NVM (00h) 00:31:15.702 Deallocate: Supported 00:31:15.702 Deallocated/Unwritten Error: Supported 00:31:15.702 Deallocated Read Value: All 0x00 00:31:15.702 Deallocate in Write Zeroes: Not Supported 00:31:15.702 Deallocated Guard Field: 0xFFFF 00:31:15.702 Flush: Supported 00:31:15.702 Reservation: Not Supported 00:31:15.702 Metadata Transferred as: Separate Metadata Buffer 00:31:15.702 Namespace Sharing Capabilities: Private 00:31:15.702 Size (in LBAs): 1548666 (5GiB) 00:31:15.702 Capacity (in LBAs): 1548666 (5GiB) 00:31:15.702 Utilization (in LBAs): 1548666 (5GiB) 00:31:15.702 Thin Provisioning: Not Supported 00:31:15.702 Per-NS Atomic Units: No 00:31:15.702 Maximum Single Source Range Length: 128 00:31:15.702 Maximum Copy Length: 128 00:31:15.702 Maximum Source Range Count: 128 00:31:15.702 NGUID/EUI64 Never Reused: No 00:31:15.702 Namespace Write Protected: No 00:31:15.702 Number of LBA Formats: 8 00:31:15.702 Current LBA Format: LBA Format #07 00:31:15.702 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:15.702 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:15.702 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:15.702 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:15.702 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:15.702 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:15.702 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:15.702 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:15.702 00:31:15.702 NVM Specific Namespace Data 00:31:15.702 =========================== 00:31:15.702 Logical Block Storage Tag Mask: 0 00:31:15.702 Protection Information Capabilities: 00:31:15.702 16b Guard Protection Information Storage Tag Support: No 00:31:15.702 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:31:15.702 Storage Tag Check Read Support: No 00:31:15.702 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.702 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.702 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.702 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.702 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.702 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.702 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.702 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:15.702 13:26:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:31:15.702 13:26:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:31:16.343 ===================================================== 00:31:16.343 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:16.343 ===================================================== 00:31:16.343 Controller Capabilities/Features 00:31:16.343 ================================ 00:31:16.343 Vendor ID: 1b36 00:31:16.343 
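Each controller banner in these dumps carries the PCI IDs [1b36:0010], the QEMU NVMe vendor/device pair also echoed in the Vendor ID field. On the test VM, the full set of emulated controllers being identified here could be enumerated with lspci (a sketch, assuming pciutils is installed):

  # List every QEMU NVMe function; expect 0000:00:10.0 through 0000:00:13.0
  # to appear, matching the controllers identified in this log:
  lspci -nn -d 1b36:0010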
Subsystem Vendor ID: 1af4 00:31:16.343 Serial Number: 12341 00:31:16.343 Model Number: QEMU NVMe Ctrl 00:31:16.343 Firmware Version: 8.0.0 00:31:16.343 Recommended Arb Burst: 6 00:31:16.343 IEEE OUI Identifier: 00 54 52 00:31:16.343 Multi-path I/O 00:31:16.343 May have multiple subsystem ports: No 00:31:16.343 May have multiple controllers: No 00:31:16.343 Associated with SR-IOV VF: No 00:31:16.343 Max Data Transfer Size: 524288 00:31:16.343 Max Number of Namespaces: 256 00:31:16.343 Max Number of I/O Queues: 64 00:31:16.343 NVMe Specification Version (VS): 1.4 00:31:16.343 NVMe Specification Version (Identify): 1.4 00:31:16.343 Maximum Queue Entries: 2048 00:31:16.343 Contiguous Queues Required: Yes 00:31:16.343 Arbitration Mechanisms Supported 00:31:16.343 Weighted Round Robin: Not Supported 00:31:16.343 Vendor Specific: Not Supported 00:31:16.344 Reset Timeout: 7500 ms 00:31:16.344 Doorbell Stride: 4 bytes 00:31:16.344 NVM Subsystem Reset: Not Supported 00:31:16.344 Command Sets Supported 00:31:16.344 NVM Command Set: Supported 00:31:16.344 Boot Partition: Not Supported 00:31:16.344 Memory Page Size Minimum: 4096 bytes 00:31:16.344 Memory Page Size Maximum: 65536 bytes 00:31:16.344 Persistent Memory Region: Not Supported 00:31:16.344 Optional Asynchronous Events Supported 00:31:16.344 Namespace Attribute Notices: Supported 00:31:16.344 Firmware Activation Notices: Not Supported 00:31:16.344 ANA Change Notices: Not Supported 00:31:16.344 PLE Aggregate Log Change Notices: Not Supported 00:31:16.344 LBA Status Info Alert Notices: Not Supported 00:31:16.344 EGE Aggregate Log Change Notices: Not Supported 00:31:16.344 Normal NVM Subsystem Shutdown event: Not Supported 00:31:16.344 Zone Descriptor Change Notices: Not Supported 00:31:16.344 Discovery Log Change Notices: Not Supported 00:31:16.344 Controller Attributes 00:31:16.344 128-bit Host Identifier: Not Supported 00:31:16.344 Non-Operational Permissive Mode: Not Supported 00:31:16.344 NVM Sets: Not Supported 00:31:16.344 Read Recovery Levels: Not Supported 00:31:16.344 Endurance Groups: Not Supported 00:31:16.344 Predictable Latency Mode: Not Supported 00:31:16.344 Traffic Based Keep ALive: Not Supported 00:31:16.344 Namespace Granularity: Not Supported 00:31:16.344 SQ Associations: Not Supported 00:31:16.344 UUID List: Not Supported 00:31:16.344 Multi-Domain Subsystem: Not Supported 00:31:16.344 Fixed Capacity Management: Not Supported 00:31:16.344 Variable Capacity Management: Not Supported 00:31:16.344 Delete Endurance Group: Not Supported 00:31:16.344 Delete NVM Set: Not Supported 00:31:16.344 Extended LBA Formats Supported: Supported 00:31:16.344 Flexible Data Placement Supported: Not Supported 00:31:16.344 00:31:16.344 Controller Memory Buffer Support 00:31:16.344 ================================ 00:31:16.344 Supported: No 00:31:16.344 00:31:16.344 Persistent Memory Region Support 00:31:16.344 ================================ 00:31:16.344 Supported: No 00:31:16.344 00:31:16.344 Admin Command Set Attributes 00:31:16.344 ============================ 00:31:16.344 Security Send/Receive: Not Supported 00:31:16.344 Format NVM: Supported 00:31:16.344 Firmware Activate/Download: Not Supported 00:31:16.344 Namespace Management: Supported 00:31:16.344 Device Self-Test: Not Supported 00:31:16.344 Directives: Supported 00:31:16.344 NVMe-MI: Not Supported 00:31:16.344 Virtualization Management: Not Supported 00:31:16.344 Doorbell Buffer Config: Supported 00:31:16.344 Get LBA Status Capability: Not Supported 00:31:16.344 Command & Feature 
Lockdown Capability: Not Supported 00:31:16.344 Abort Command Limit: 4 00:31:16.344 Async Event Request Limit: 4 00:31:16.344 Number of Firmware Slots: N/A 00:31:16.344 Firmware Slot 1 Read-Only: N/A 00:31:16.344 Firmware Activation Without Reset: N/A 00:31:16.344 Multiple Update Detection Support: N/A 00:31:16.344 Firmware Update Granularity: No Information Provided 00:31:16.344 Per-Namespace SMART Log: Yes 00:31:16.344 Asymmetric Namespace Access Log Page: Not Supported 00:31:16.344 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:31:16.344 Command Effects Log Page: Supported 00:31:16.344 Get Log Page Extended Data: Supported 00:31:16.344 Telemetry Log Pages: Not Supported 00:31:16.344 Persistent Event Log Pages: Not Supported 00:31:16.344 Supported Log Pages Log Page: May Support 00:31:16.344 Commands Supported & Effects Log Page: Not Supported 00:31:16.344 Feature Identifiers & Effects Log Page:May Support 00:31:16.344 NVMe-MI Commands & Effects Log Page: May Support 00:31:16.344 Data Area 4 for Telemetry Log: Not Supported 00:31:16.344 Error Log Page Entries Supported: 1 00:31:16.344 Keep Alive: Not Supported 00:31:16.344 00:31:16.344 NVM Command Set Attributes 00:31:16.344 ========================== 00:31:16.344 Submission Queue Entry Size 00:31:16.344 Max: 64 00:31:16.344 Min: 64 00:31:16.344 Completion Queue Entry Size 00:31:16.344 Max: 16 00:31:16.344 Min: 16 00:31:16.344 Number of Namespaces: 256 00:31:16.344 Compare Command: Supported 00:31:16.344 Write Uncorrectable Command: Not Supported 00:31:16.344 Dataset Management Command: Supported 00:31:16.344 Write Zeroes Command: Supported 00:31:16.344 Set Features Save Field: Supported 00:31:16.344 Reservations: Not Supported 00:31:16.344 Timestamp: Supported 00:31:16.344 Copy: Supported 00:31:16.344 Volatile Write Cache: Present 00:31:16.344 Atomic Write Unit (Normal): 1 00:31:16.344 Atomic Write Unit (PFail): 1 00:31:16.344 Atomic Compare & Write Unit: 1 00:31:16.344 Fused Compare & Write: Not Supported 00:31:16.344 Scatter-Gather List 00:31:16.344 SGL Command Set: Supported 00:31:16.344 SGL Keyed: Not Supported 00:31:16.344 SGL Bit Bucket Descriptor: Not Supported 00:31:16.344 SGL Metadata Pointer: Not Supported 00:31:16.344 Oversized SGL: Not Supported 00:31:16.344 SGL Metadata Address: Not Supported 00:31:16.344 SGL Offset: Not Supported 00:31:16.344 Transport SGL Data Block: Not Supported 00:31:16.344 Replay Protected Memory Block: Not Supported 00:31:16.344 00:31:16.344 Firmware Slot Information 00:31:16.344 ========================= 00:31:16.344 Active slot: 1 00:31:16.344 Slot 1 Firmware Revision: 1.0 00:31:16.344 00:31:16.344 00:31:16.344 Commands Supported and Effects 00:31:16.344 ============================== 00:31:16.344 Admin Commands 00:31:16.344 -------------- 00:31:16.344 Delete I/O Submission Queue (00h): Supported 00:31:16.344 Create I/O Submission Queue (01h): Supported 00:31:16.344 Get Log Page (02h): Supported 00:31:16.344 Delete I/O Completion Queue (04h): Supported 00:31:16.344 Create I/O Completion Queue (05h): Supported 00:31:16.344 Identify (06h): Supported 00:31:16.344 Abort (08h): Supported 00:31:16.344 Set Features (09h): Supported 00:31:16.344 Get Features (0Ah): Supported 00:31:16.344 Asynchronous Event Request (0Ch): Supported 00:31:16.344 Namespace Attachment (15h): Supported NS-Inventory-Change 00:31:16.344 Directive Send (19h): Supported 00:31:16.344 Directive Receive (1Ah): Supported 00:31:16.344 Virtualization Management (1Ch): Supported 00:31:16.344 Doorbell Buffer Config (7Ch): Supported 
00:31:16.344 Format NVM (80h): Supported LBA-Change 00:31:16.344 I/O Commands 00:31:16.344 ------------ 00:31:16.344 Flush (00h): Supported LBA-Change 00:31:16.344 Write (01h): Supported LBA-Change 00:31:16.344 Read (02h): Supported 00:31:16.344 Compare (05h): Supported 00:31:16.344 Write Zeroes (08h): Supported LBA-Change 00:31:16.344 Dataset Management (09h): Supported LBA-Change 00:31:16.344 Unknown (0Ch): Supported 00:31:16.344 Unknown (12h): Supported 00:31:16.344 Copy (19h): Supported LBA-Change 00:31:16.344 Unknown (1Dh): Supported LBA-Change 00:31:16.344 00:31:16.344 Error Log 00:31:16.344 ========= 00:31:16.344 00:31:16.344 Arbitration 00:31:16.344 =========== 00:31:16.344 Arbitration Burst: no limit 00:31:16.344 00:31:16.344 Power Management 00:31:16.344 ================ 00:31:16.344 Number of Power States: 1 00:31:16.344 Current Power State: Power State #0 00:31:16.344 Power State #0: 00:31:16.344 Max Power: 25.00 W 00:31:16.344 Non-Operational State: Operational 00:31:16.344 Entry Latency: 16 microseconds 00:31:16.344 Exit Latency: 4 microseconds 00:31:16.344 Relative Read Throughput: 0 00:31:16.344 Relative Read Latency: 0 00:31:16.344 Relative Write Throughput: 0 00:31:16.344 Relative Write Latency: 0 00:31:16.344 Idle Power: Not Reported 00:31:16.344 Active Power: Not Reported 00:31:16.344 Non-Operational Permissive Mode: Not Supported 00:31:16.344 00:31:16.344 Health Information 00:31:16.344 ================== 00:31:16.344 Critical Warnings: 00:31:16.344 Available Spare Space: OK 00:31:16.344 Temperature: OK 00:31:16.344 Device Reliability: OK 00:31:16.344 Read Only: No 00:31:16.344 Volatile Memory Backup: OK 00:31:16.344 Current Temperature: 323 Kelvin (50 Celsius) 00:31:16.344 Temperature Threshold: 343 Kelvin (70 Celsius) 00:31:16.344 Available Spare: 0% 00:31:16.344 Available Spare Threshold: 0% 00:31:16.344 Life Percentage Used: 0% 00:31:16.344 Data Units Read: 939 00:31:16.344 Data Units Written: 806 00:31:16.344 Host Read Commands: 44421 00:31:16.344 Host Write Commands: 43205 00:31:16.344 Controller Busy Time: 0 minutes 00:31:16.344 Power Cycles: 0 00:31:16.344 Power On Hours: 0 hours 00:31:16.344 Unsafe Shutdowns: 0 00:31:16.344 Unrecoverable Media Errors: 0 00:31:16.344 Lifetime Error Log Entries: 0 00:31:16.344 Warning Temperature Time: 0 minutes 00:31:16.344 Critical Temperature Time: 0 minutes 00:31:16.344 00:31:16.344 Number of Queues 00:31:16.344 ================ 00:31:16.344 Number of I/O Submission Queues: 64 00:31:16.344 Number of I/O Completion Queues: 64 00:31:16.344 00:31:16.344 ZNS Specific Controller Data 00:31:16.344 ============================ 00:31:16.344 Zone Append Size Limit: 0 00:31:16.344 00:31:16.344 00:31:16.344 Active Namespaces 00:31:16.344 ================= 00:31:16.344 Namespace ID:1 00:31:16.344 Error Recovery Timeout: Unlimited 00:31:16.344 Command Set Identifier: NVM (00h) 00:31:16.344 Deallocate: Supported 00:31:16.345 Deallocated/Unwritten Error: Supported 00:31:16.345 Deallocated Read Value: All 0x00 00:31:16.345 Deallocate in Write Zeroes: Not Supported 00:31:16.345 Deallocated Guard Field: 0xFFFF 00:31:16.345 Flush: Supported 00:31:16.345 Reservation: Not Supported 00:31:16.345 Namespace Sharing Capabilities: Private 00:31:16.345 Size (in LBAs): 1310720 (5GiB) 00:31:16.345 Capacity (in LBAs): 1310720 (5GiB) 00:31:16.345 Utilization (in LBAs): 1310720 (5GiB) 00:31:16.345 Thin Provisioning: Not Supported 00:31:16.345 Per-NS Atomic Units: No 00:31:16.345 Maximum Single Source Range Length: 128 00:31:16.345 Maximum Copy Length: 128 
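The Data Units counters in the Health Information block above are defined by the NVMe spec in thousands of 512-byte units, so the 0000:00:11.0 figures decode as follows (a quick shell check, not part of the test output):

  # One "data unit" = 1000 * 512 bytes:
  echo $((939 * 512 * 1000))  # Data Units Read    -> 480768000 bytes (~481 MB)
  echo $((806 * 512 * 1000))  # Data Units Written -> 412672000 bytes (~413 MB)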
00:31:16.345 Maximum Source Range Count: 128 00:31:16.345 NGUID/EUI64 Never Reused: No 00:31:16.345 Namespace Write Protected: No 00:31:16.345 Number of LBA Formats: 8 00:31:16.345 Current LBA Format: LBA Format #04 00:31:16.345 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:16.345 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:16.345 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:16.345 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:16.345 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:16.345 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:16.345 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:16.345 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:16.345 00:31:16.345 NVM Specific Namespace Data 00:31:16.345 =========================== 00:31:16.345 Logical Block Storage Tag Mask: 0 00:31:16.345 Protection Information Capabilities: 00:31:16.345 16b Guard Protection Information Storage Tag Support: No 00:31:16.345 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:31:16.345 Storage Tag Check Read Support: No 00:31:16.345 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.345 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.345 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.345 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.345 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.345 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.345 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.345 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.345 13:26:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:31:16.345 13:26:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:31:16.620 ===================================================== 00:31:16.620 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:16.621 ===================================================== 00:31:16.621 Controller Capabilities/Features 00:31:16.621 ================================ 00:31:16.621 Vendor ID: 1b36 00:31:16.621 Subsystem Vendor ID: 1af4 00:31:16.621 Serial Number: 12342 00:31:16.621 Model Number: QEMU NVMe Ctrl 00:31:16.621 Firmware Version: 8.0.0 00:31:16.621 Recommended Arb Burst: 6 00:31:16.621 IEEE OUI Identifier: 00 54 52 00:31:16.621 Multi-path I/O 00:31:16.621 May have multiple subsystem ports: No 00:31:16.621 May have multiple controllers: No 00:31:16.621 Associated with SR-IOV VF: No 00:31:16.621 Max Data Transfer Size: 524288 00:31:16.621 Max Number of Namespaces: 256 00:31:16.621 Max Number of I/O Queues: 64 00:31:16.621 NVMe Specification Version (VS): 1.4 00:31:16.621 NVMe Specification Version (Identify): 1.4 00:31:16.621 Maximum Queue Entries: 2048 00:31:16.621 Contiguous Queues Required: Yes 00:31:16.621 Arbitration Mechanisms Supported 00:31:16.621 Weighted Round Robin: Not Supported 00:31:16.621 Vendor Specific: Not Supported 00:31:16.621 Reset Timeout: 7500 ms 00:31:16.621 Doorbell Stride: 4 bytes 00:31:16.621 NVM Subsystem Reset: Not Supported 00:31:16.621 Command Sets Supported 00:31:16.621 NVM Command 
Set: Supported 00:31:16.621 Boot Partition: Not Supported 00:31:16.621 Memory Page Size Minimum: 4096 bytes 00:31:16.621 Memory Page Size Maximum: 65536 bytes 00:31:16.621 Persistent Memory Region: Not Supported 00:31:16.621 Optional Asynchronous Events Supported 00:31:16.621 Namespace Attribute Notices: Supported 00:31:16.621 Firmware Activation Notices: Not Supported 00:31:16.621 ANA Change Notices: Not Supported 00:31:16.621 PLE Aggregate Log Change Notices: Not Supported 00:31:16.621 LBA Status Info Alert Notices: Not Supported 00:31:16.621 EGE Aggregate Log Change Notices: Not Supported 00:31:16.621 Normal NVM Subsystem Shutdown event: Not Supported 00:31:16.621 Zone Descriptor Change Notices: Not Supported 00:31:16.621 Discovery Log Change Notices: Not Supported 00:31:16.621 Controller Attributes 00:31:16.621 128-bit Host Identifier: Not Supported 00:31:16.621 Non-Operational Permissive Mode: Not Supported 00:31:16.621 NVM Sets: Not Supported 00:31:16.621 Read Recovery Levels: Not Supported 00:31:16.621 Endurance Groups: Not Supported 00:31:16.621 Predictable Latency Mode: Not Supported 00:31:16.621 Traffic Based Keep ALive: Not Supported 00:31:16.621 Namespace Granularity: Not Supported 00:31:16.621 SQ Associations: Not Supported 00:31:16.621 UUID List: Not Supported 00:31:16.621 Multi-Domain Subsystem: Not Supported 00:31:16.621 Fixed Capacity Management: Not Supported 00:31:16.621 Variable Capacity Management: Not Supported 00:31:16.621 Delete Endurance Group: Not Supported 00:31:16.621 Delete NVM Set: Not Supported 00:31:16.621 Extended LBA Formats Supported: Supported 00:31:16.621 Flexible Data Placement Supported: Not Supported 00:31:16.621 00:31:16.621 Controller Memory Buffer Support 00:31:16.621 ================================ 00:31:16.621 Supported: No 00:31:16.621 00:31:16.621 Persistent Memory Region Support 00:31:16.621 ================================ 00:31:16.621 Supported: No 00:31:16.621 00:31:16.621 Admin Command Set Attributes 00:31:16.621 ============================ 00:31:16.621 Security Send/Receive: Not Supported 00:31:16.621 Format NVM: Supported 00:31:16.621 Firmware Activate/Download: Not Supported 00:31:16.621 Namespace Management: Supported 00:31:16.621 Device Self-Test: Not Supported 00:31:16.621 Directives: Supported 00:31:16.621 NVMe-MI: Not Supported 00:31:16.621 Virtualization Management: Not Supported 00:31:16.621 Doorbell Buffer Config: Supported 00:31:16.621 Get LBA Status Capability: Not Supported 00:31:16.621 Command & Feature Lockdown Capability: Not Supported 00:31:16.621 Abort Command Limit: 4 00:31:16.621 Async Event Request Limit: 4 00:31:16.621 Number of Firmware Slots: N/A 00:31:16.621 Firmware Slot 1 Read-Only: N/A 00:31:16.621 Firmware Activation Without Reset: N/A 00:31:16.621 Multiple Update Detection Support: N/A 00:31:16.621 Firmware Update Granularity: No Information Provided 00:31:16.621 Per-Namespace SMART Log: Yes 00:31:16.621 Asymmetric Namespace Access Log Page: Not Supported 00:31:16.621 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:31:16.621 Command Effects Log Page: Supported 00:31:16.621 Get Log Page Extended Data: Supported 00:31:16.621 Telemetry Log Pages: Not Supported 00:31:16.621 Persistent Event Log Pages: Not Supported 00:31:16.621 Supported Log Pages Log Page: May Support 00:31:16.621 Commands Supported & Effects Log Page: Not Supported 00:31:16.621 Feature Identifiers & Effects Log Page:May Support 00:31:16.621 NVMe-MI Commands & Effects Log Page: May Support 00:31:16.621 Data Area 4 for Telemetry Log: Not 
Supported 00:31:16.621 Error Log Page Entries Supported: 1 00:31:16.621 Keep Alive: Not Supported 00:31:16.621 00:31:16.621 NVM Command Set Attributes 00:31:16.621 ========================== 00:31:16.621 Submission Queue Entry Size 00:31:16.621 Max: 64 00:31:16.621 Min: 64 00:31:16.621 Completion Queue Entry Size 00:31:16.621 Max: 16 00:31:16.621 Min: 16 00:31:16.621 Number of Namespaces: 256 00:31:16.621 Compare Command: Supported 00:31:16.621 Write Uncorrectable Command: Not Supported 00:31:16.621 Dataset Management Command: Supported 00:31:16.621 Write Zeroes Command: Supported 00:31:16.621 Set Features Save Field: Supported 00:31:16.621 Reservations: Not Supported 00:31:16.621 Timestamp: Supported 00:31:16.621 Copy: Supported 00:31:16.621 Volatile Write Cache: Present 00:31:16.621 Atomic Write Unit (Normal): 1 00:31:16.621 Atomic Write Unit (PFail): 1 00:31:16.621 Atomic Compare & Write Unit: 1 00:31:16.621 Fused Compare & Write: Not Supported 00:31:16.621 Scatter-Gather List 00:31:16.621 SGL Command Set: Supported 00:31:16.621 SGL Keyed: Not Supported 00:31:16.621 SGL Bit Bucket Descriptor: Not Supported 00:31:16.621 SGL Metadata Pointer: Not Supported 00:31:16.621 Oversized SGL: Not Supported 00:31:16.621 SGL Metadata Address: Not Supported 00:31:16.621 SGL Offset: Not Supported 00:31:16.621 Transport SGL Data Block: Not Supported 00:31:16.621 Replay Protected Memory Block: Not Supported 00:31:16.621 00:31:16.621 Firmware Slot Information 00:31:16.621 ========================= 00:31:16.621 Active slot: 1 00:31:16.621 Slot 1 Firmware Revision: 1.0 00:31:16.621 00:31:16.621 00:31:16.621 Commands Supported and Effects 00:31:16.621 ============================== 00:31:16.621 Admin Commands 00:31:16.621 -------------- 00:31:16.621 Delete I/O Submission Queue (00h): Supported 00:31:16.621 Create I/O Submission Queue (01h): Supported 00:31:16.621 Get Log Page (02h): Supported 00:31:16.621 Delete I/O Completion Queue (04h): Supported 00:31:16.621 Create I/O Completion Queue (05h): Supported 00:31:16.621 Identify (06h): Supported 00:31:16.621 Abort (08h): Supported 00:31:16.621 Set Features (09h): Supported 00:31:16.621 Get Features (0Ah): Supported 00:31:16.621 Asynchronous Event Request (0Ch): Supported 00:31:16.621 Namespace Attachment (15h): Supported NS-Inventory-Change 00:31:16.621 Directive Send (19h): Supported 00:31:16.621 Directive Receive (1Ah): Supported 00:31:16.621 Virtualization Management (1Ch): Supported 00:31:16.621 Doorbell Buffer Config (7Ch): Supported 00:31:16.621 Format NVM (80h): Supported LBA-Change 00:31:16.621 I/O Commands 00:31:16.621 ------------ 00:31:16.621 Flush (00h): Supported LBA-Change 00:31:16.621 Write (01h): Supported LBA-Change 00:31:16.621 Read (02h): Supported 00:31:16.621 Compare (05h): Supported 00:31:16.621 Write Zeroes (08h): Supported LBA-Change 00:31:16.621 Dataset Management (09h): Supported LBA-Change 00:31:16.621 Unknown (0Ch): Supported 00:31:16.621 Unknown (12h): Supported 00:31:16.621 Copy (19h): Supported LBA-Change 00:31:16.621 Unknown (1Dh): Supported LBA-Change 00:31:16.621 00:31:16.621 Error Log 00:31:16.621 ========= 00:31:16.621 00:31:16.621 Arbitration 00:31:16.621 =========== 00:31:16.621 Arbitration Burst: no limit 00:31:16.621 00:31:16.621 Power Management 00:31:16.621 ================ 00:31:16.621 Number of Power States: 1 00:31:16.621 Current Power State: Power State #0 00:31:16.621 Power State #0: 00:31:16.621 Max Power: 25.00 W 00:31:16.621 Non-Operational State: Operational 00:31:16.621 Entry Latency: 16 microseconds 
00:31:16.621 Exit Latency: 4 microseconds 00:31:16.622 Relative Read Throughput: 0 00:31:16.622 Relative Read Latency: 0 00:31:16.622 Relative Write Throughput: 0 00:31:16.622 Relative Write Latency: 0 00:31:16.622 Idle Power: Not Reported 00:31:16.622 Active Power: Not Reported 00:31:16.622 Non-Operational Permissive Mode: Not Supported 00:31:16.622 00:31:16.622 Health Information 00:31:16.622 ================== 00:31:16.622 Critical Warnings: 00:31:16.622 Available Spare Space: OK 00:31:16.622 Temperature: OK 00:31:16.622 Device Reliability: OK 00:31:16.622 Read Only: No 00:31:16.622 Volatile Memory Backup: OK 00:31:16.622 Current Temperature: 323 Kelvin (50 Celsius) 00:31:16.622 Temperature Threshold: 343 Kelvin (70 Celsius) 00:31:16.622 Available Spare: 0% 00:31:16.622 Available Spare Threshold: 0% 00:31:16.622 Life Percentage Used: 0% 00:31:16.622 Data Units Read: 1995 00:31:16.622 Data Units Written: 1782 00:31:16.622 Host Read Commands: 92026 00:31:16.622 Host Write Commands: 90295 00:31:16.622 Controller Busy Time: 0 minutes 00:31:16.622 Power Cycles: 0 00:31:16.622 Power On Hours: 0 hours 00:31:16.622 Unsafe Shutdowns: 0 00:31:16.622 Unrecoverable Media Errors: 0 00:31:16.622 Lifetime Error Log Entries: 0 00:31:16.622 Warning Temperature Time: 0 minutes 00:31:16.622 Critical Temperature Time: 0 minutes 00:31:16.622 00:31:16.622 Number of Queues 00:31:16.622 ================ 00:31:16.622 Number of I/O Submission Queues: 64 00:31:16.622 Number of I/O Completion Queues: 64 00:31:16.622 00:31:16.622 ZNS Specific Controller Data 00:31:16.622 ============================ 00:31:16.622 Zone Append Size Limit: 0 00:31:16.622 00:31:16.622 00:31:16.622 Active Namespaces 00:31:16.622 ================= 00:31:16.622 Namespace ID:1 00:31:16.622 Error Recovery Timeout: Unlimited 00:31:16.622 Command Set Identifier: NVM (00h) 00:31:16.622 Deallocate: Supported 00:31:16.622 Deallocated/Unwritten Error: Supported 00:31:16.622 Deallocated Read Value: All 0x00 00:31:16.622 Deallocate in Write Zeroes: Not Supported 00:31:16.622 Deallocated Guard Field: 0xFFFF 00:31:16.622 Flush: Supported 00:31:16.622 Reservation: Not Supported 00:31:16.622 Namespace Sharing Capabilities: Private 00:31:16.622 Size (in LBAs): 1048576 (4GiB) 00:31:16.622 Capacity (in LBAs): 1048576 (4GiB) 00:31:16.622 Utilization (in LBAs): 1048576 (4GiB) 00:31:16.622 Thin Provisioning: Not Supported 00:31:16.622 Per-NS Atomic Units: No 00:31:16.622 Maximum Single Source Range Length: 128 00:31:16.622 Maximum Copy Length: 128 00:31:16.622 Maximum Source Range Count: 128 00:31:16.622 NGUID/EUI64 Never Reused: No 00:31:16.622 Namespace Write Protected: No 00:31:16.622 Number of LBA Formats: 8 00:31:16.622 Current LBA Format: LBA Format #04 00:31:16.622 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:16.622 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:16.622 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:16.622 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:16.622 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:16.622 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:16.622 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:16.622 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:16.622 00:31:16.622 NVM Specific Namespace Data 00:31:16.622 =========================== 00:31:16.622 Logical Block Storage Tag Mask: 0 00:31:16.622 Protection Information Capabilities: 00:31:16.622 16b Guard Protection Information Storage Tag Support: No 00:31:16.622 16b Guard Protection Information Storage Tag 
Mask: Any bit in LBSTM can be 0 00:31:16.622 Storage Tag Check Read Support: No 00:31:16.622 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Namespace ID:2 00:31:16.622 Error Recovery Timeout: Unlimited 00:31:16.622 Command Set Identifier: NVM (00h) 00:31:16.622 Deallocate: Supported 00:31:16.622 Deallocated/Unwritten Error: Supported 00:31:16.622 Deallocated Read Value: All 0x00 00:31:16.622 Deallocate in Write Zeroes: Not Supported 00:31:16.622 Deallocated Guard Field: 0xFFFF 00:31:16.622 Flush: Supported 00:31:16.622 Reservation: Not Supported 00:31:16.622 Namespace Sharing Capabilities: Private 00:31:16.622 Size (in LBAs): 1048576 (4GiB) 00:31:16.622 Capacity (in LBAs): 1048576 (4GiB) 00:31:16.622 Utilization (in LBAs): 1048576 (4GiB) 00:31:16.622 Thin Provisioning: Not Supported 00:31:16.622 Per-NS Atomic Units: No 00:31:16.622 Maximum Single Source Range Length: 128 00:31:16.622 Maximum Copy Length: 128 00:31:16.622 Maximum Source Range Count: 128 00:31:16.622 NGUID/EUI64 Never Reused: No 00:31:16.622 Namespace Write Protected: No 00:31:16.622 Number of LBA Formats: 8 00:31:16.622 Current LBA Format: LBA Format #04 00:31:16.622 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:16.622 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:16.622 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:16.622 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:16.622 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:16.622 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:16.622 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:16.622 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:16.622 00:31:16.622 NVM Specific Namespace Data 00:31:16.622 =========================== 00:31:16.622 Logical Block Storage Tag Mask: 0 00:31:16.622 Protection Information Capabilities: 00:31:16.622 16b Guard Protection Information Storage Tag Support: No 00:31:16.622 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:31:16.622 Storage Tag Check Read Support: No 00:31:16.622 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:31:16.622 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Namespace ID:3 00:31:16.622 Error Recovery Timeout: Unlimited 00:31:16.622 Command Set Identifier: NVM (00h) 00:31:16.622 Deallocate: Supported 00:31:16.622 Deallocated/Unwritten Error: Supported 00:31:16.622 Deallocated Read Value: All 0x00 00:31:16.622 Deallocate in Write Zeroes: Not Supported 00:31:16.622 Deallocated Guard Field: 0xFFFF 00:31:16.622 Flush: Supported 00:31:16.622 Reservation: Not Supported 00:31:16.622 Namespace Sharing Capabilities: Private 00:31:16.622 Size (in LBAs): 1048576 (4GiB) 00:31:16.622 Capacity (in LBAs): 1048576 (4GiB) 00:31:16.622 Utilization (in LBAs): 1048576 (4GiB) 00:31:16.622 Thin Provisioning: Not Supported 00:31:16.622 Per-NS Atomic Units: No 00:31:16.622 Maximum Single Source Range Length: 128 00:31:16.622 Maximum Copy Length: 128 00:31:16.622 Maximum Source Range Count: 128 00:31:16.622 NGUID/EUI64 Never Reused: No 00:31:16.622 Namespace Write Protected: No 00:31:16.622 Number of LBA Formats: 8 00:31:16.622 Current LBA Format: LBA Format #04 00:31:16.622 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:16.622 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:16.622 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:16.622 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:16.622 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:16.622 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:16.622 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:16.622 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:16.622 00:31:16.622 NVM Specific Namespace Data 00:31:16.622 =========================== 00:31:16.622 Logical Block Storage Tag Mask: 0 00:31:16.622 Protection Information Capabilities: 00:31:16.622 16b Guard Protection Information Storage Tag Support: No 00:31:16.622 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:31:16.622 Storage Tag Check Read Support: No 00:31:16.622 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.622 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.623 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.623 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.623 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.623 13:26:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:31:16.623 13:26:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:31:16.881 ===================================================== 00:31:16.882 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:16.882 ===================================================== 00:31:16.882 Controller Capabilities/Features 00:31:16.882 ================================ 00:31:16.882 Vendor ID: 1b36 00:31:16.882 Subsystem Vendor ID: 1af4 00:31:16.882 Serial Number: 12343 00:31:16.882 Model Number: QEMU NVMe Ctrl 00:31:16.882 Firmware Version: 
8.0.0 00:31:16.882 Recommended Arb Burst: 6 00:31:16.882 IEEE OUI Identifier: 00 54 52 00:31:16.882 Multi-path I/O 00:31:16.882 May have multiple subsystem ports: No 00:31:16.882 May have multiple controllers: Yes 00:31:16.882 Associated with SR-IOV VF: No 00:31:16.882 Max Data Transfer Size: 524288 00:31:16.882 Max Number of Namespaces: 256 00:31:16.882 Max Number of I/O Queues: 64 00:31:16.882 NVMe Specification Version (VS): 1.4 00:31:16.882 NVMe Specification Version (Identify): 1.4 00:31:16.882 Maximum Queue Entries: 2048 00:31:16.882 Contiguous Queues Required: Yes 00:31:16.882 Arbitration Mechanisms Supported 00:31:16.882 Weighted Round Robin: Not Supported 00:31:16.882 Vendor Specific: Not Supported 00:31:16.882 Reset Timeout: 7500 ms 00:31:16.882 Doorbell Stride: 4 bytes 00:31:16.882 NVM Subsystem Reset: Not Supported 00:31:16.882 Command Sets Supported 00:31:16.882 NVM Command Set: Supported 00:31:16.882 Boot Partition: Not Supported 00:31:16.882 Memory Page Size Minimum: 4096 bytes 00:31:16.882 Memory Page Size Maximum: 65536 bytes 00:31:16.882 Persistent Memory Region: Not Supported 00:31:16.882 Optional Asynchronous Events Supported 00:31:16.882 Namespace Attribute Notices: Supported 00:31:16.882 Firmware Activation Notices: Not Supported 00:31:16.882 ANA Change Notices: Not Supported 00:31:16.882 PLE Aggregate Log Change Notices: Not Supported 00:31:16.882 LBA Status Info Alert Notices: Not Supported 00:31:16.882 EGE Aggregate Log Change Notices: Not Supported 00:31:16.882 Normal NVM Subsystem Shutdown event: Not Supported 00:31:16.882 Zone Descriptor Change Notices: Not Supported 00:31:16.882 Discovery Log Change Notices: Not Supported 00:31:16.882 Controller Attributes 00:31:16.882 128-bit Host Identifier: Not Supported 00:31:16.882 Non-Operational Permissive Mode: Not Supported 00:31:16.882 NVM Sets: Not Supported 00:31:16.882 Read Recovery Levels: Not Supported 00:31:16.882 Endurance Groups: Supported 00:31:16.882 Predictable Latency Mode: Not Supported 00:31:16.882 Traffic Based Keep Alive: Not Supported 00:31:16.882 Namespace Granularity: Not Supported 00:31:16.882 SQ Associations: Not Supported 00:31:16.882 UUID List: Not Supported 00:31:16.882 Multi-Domain Subsystem: Not Supported 00:31:16.882 Fixed Capacity Management: Not Supported 00:31:16.882 Variable Capacity Management: Not Supported 00:31:16.882 Delete Endurance Group: Not Supported 00:31:16.882 Delete NVM Set: Not Supported 00:31:16.882 Extended LBA Formats Supported: Supported 00:31:16.882 Flexible Data Placement Supported: Supported 00:31:16.882 00:31:16.882 Controller Memory Buffer Support 00:31:16.882 ================================ 00:31:16.882 Supported: No 00:31:16.882 00:31:16.882 Persistent Memory Region Support 00:31:16.882 ================================ 00:31:16.882 Supported: No 00:31:16.882 00:31:16.882 Admin Command Set Attributes 00:31:16.882 ============================ 00:31:16.882 Security Send/Receive: Not Supported 00:31:16.882 Format NVM: Supported 00:31:16.882 Firmware Activate/Download: Not Supported 00:31:16.882 Namespace Management: Supported 00:31:16.882 Device Self-Test: Not Supported 00:31:16.882 Directives: Supported 00:31:16.882 NVMe-MI: Not Supported 00:31:16.882 Virtualization Management: Not Supported 00:31:16.882 Doorbell Buffer Config: Supported 00:31:16.882 Get LBA Status Capability: Not Supported 00:31:16.882 Command & Feature Lockdown Capability: Not Supported 00:31:16.882 Abort Command Limit: 4 00:31:16.882 Async Event Request Limit: 4 00:31:16.882 Number of Firmware
Slots: N/A 00:31:16.882 Firmware Slot 1 Read-Only: N/A 00:31:16.882 Firmware Activation Without Reset: N/A 00:31:16.882 Multiple Update Detection Support: N/A 00:31:16.882 Firmware Update Granularity: No Information Provided 00:31:16.882 Per-Namespace SMART Log: Yes 00:31:16.882 Asymmetric Namespace Access Log Page: Not Supported 00:31:16.882 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:31:16.882 Command Effects Log Page: Supported 00:31:16.882 Get Log Page Extended Data: Supported 00:31:16.882 Telemetry Log Pages: Not Supported 00:31:16.882 Persistent Event Log Pages: Not Supported 00:31:16.882 Supported Log Pages Log Page: May Support 00:31:16.882 Commands Supported & Effects Log Page: Not Supported 00:31:16.882 Feature Identifiers & Effects Log Page: May Support 00:31:16.882 NVMe-MI Commands & Effects Log Page: May Support 00:31:16.882 Data Area 4 for Telemetry Log: Not Supported 00:31:16.882 Error Log Page Entries Supported: 1 00:31:16.882 Keep Alive: Not Supported 00:31:16.882 00:31:16.882 NVM Command Set Attributes 00:31:16.882 ========================== 00:31:16.882 Submission Queue Entry Size 00:31:16.882 Max: 64 00:31:16.882 Min: 64 00:31:16.882 Completion Queue Entry Size 00:31:16.882 Max: 16 00:31:16.882 Min: 16 00:31:16.882 Number of Namespaces: 256 00:31:16.882 Compare Command: Supported 00:31:16.882 Write Uncorrectable Command: Not Supported 00:31:16.882 Dataset Management Command: Supported 00:31:16.882 Write Zeroes Command: Supported 00:31:16.882 Set Features Save Field: Supported 00:31:16.882 Reservations: Not Supported 00:31:16.882 Timestamp: Supported 00:31:16.882 Copy: Supported 00:31:16.882 Volatile Write Cache: Present 00:31:16.882 Atomic Write Unit (Normal): 1 00:31:16.882 Atomic Write Unit (PFail): 1 00:31:16.882 Atomic Compare & Write Unit: 1 00:31:16.882 Fused Compare & Write: Not Supported 00:31:16.882 Scatter-Gather List 00:31:16.882 SGL Command Set: Supported 00:31:16.882 SGL Keyed: Not Supported 00:31:16.882 SGL Bit Bucket Descriptor: Not Supported 00:31:16.882 SGL Metadata Pointer: Not Supported 00:31:16.882 Oversized SGL: Not Supported 00:31:16.882 SGL Metadata Address: Not Supported 00:31:16.882 SGL Offset: Not Supported 00:31:16.882 Transport SGL Data Block: Not Supported 00:31:16.882 Replay Protected Memory Block: Not Supported 00:31:16.882 00:31:16.882 Firmware Slot Information 00:31:16.882 ========================= 00:31:16.882 Active slot: 1 00:31:16.882 Slot 1 Firmware Revision: 1.0 00:31:16.882 00:31:16.882 00:31:16.882 Commands Supported and Effects 00:31:16.882 ============================== 00:31:16.882 Admin Commands 00:31:16.882 -------------- 00:31:16.882 Delete I/O Submission Queue (00h): Supported 00:31:16.882 Create I/O Submission Queue (01h): Supported 00:31:16.882 Get Log Page (02h): Supported 00:31:16.882 Delete I/O Completion Queue (04h): Supported 00:31:16.882 Create I/O Completion Queue (05h): Supported 00:31:16.882 Identify (06h): Supported 00:31:16.882 Abort (08h): Supported 00:31:16.882 Set Features (09h): Supported 00:31:16.882 Get Features (0Ah): Supported 00:31:16.882 Asynchronous Event Request (0Ch): Supported 00:31:16.882 Namespace Attachment (15h): Supported NS-Inventory-Change 00:31:16.882 Directive Send (19h): Supported 00:31:16.882 Directive Receive (1Ah): Supported 00:31:16.882 Virtualization Management (1Ch): Supported 00:31:16.882 Doorbell Buffer Config (7Ch): Supported 00:31:16.882 Format NVM (80h): Supported LBA-Change 00:31:16.882 I/O Commands 00:31:16.882 ------------ 00:31:16.882 Flush (00h): Supported
LBA-Change 00:31:16.882 Write (01h): Supported LBA-Change 00:31:16.882 Read (02h): Supported 00:31:16.882 Compare (05h): Supported 00:31:16.882 Write Zeroes (08h): Supported LBA-Change 00:31:16.882 Dataset Management (09h): Supported LBA-Change 00:31:16.882 Unknown (0Ch): Supported 00:31:16.882 Unknown (12h): Supported 00:31:16.882 Copy (19h): Supported LBA-Change 00:31:16.882 Unknown (1Dh): Supported LBA-Change 00:31:16.882 00:31:16.882 Error Log 00:31:16.882 ========= 00:31:16.882 00:31:16.882 Arbitration 00:31:16.882 =========== 00:31:16.882 Arbitration Burst: no limit 00:31:16.882 00:31:16.882 Power Management 00:31:16.882 ================ 00:31:16.882 Number of Power States: 1 00:31:16.882 Current Power State: Power State #0 00:31:16.882 Power State #0: 00:31:16.882 Max Power: 25.00 W 00:31:16.882 Non-Operational State: Operational 00:31:16.882 Entry Latency: 16 microseconds 00:31:16.883 Exit Latency: 4 microseconds 00:31:16.883 Relative Read Throughput: 0 00:31:16.883 Relative Read Latency: 0 00:31:16.883 Relative Write Throughput: 0 00:31:16.883 Relative Write Latency: 0 00:31:16.883 Idle Power: Not Reported 00:31:16.883 Active Power: Not Reported 00:31:16.883 Non-Operational Permissive Mode: Not Supported 00:31:16.883 00:31:16.883 Health Information 00:31:16.883 ================== 00:31:16.883 Critical Warnings: 00:31:16.883 Available Spare Space: OK 00:31:16.883 Temperature: OK 00:31:16.883 Device Reliability: OK 00:31:16.883 Read Only: No 00:31:16.883 Volatile Memory Backup: OK 00:31:16.883 Current Temperature: 323 Kelvin (50 Celsius) 00:31:16.883 Temperature Threshold: 343 Kelvin (70 Celsius) 00:31:16.883 Available Spare: 0% 00:31:16.883 Available Spare Threshold: 0% 00:31:16.883 Life Percentage Used: 0% 00:31:16.883 Data Units Read: 716 00:31:16.883 Data Units Written: 645 00:31:16.883 Host Read Commands: 31113 00:31:16.883 Host Write Commands: 30536 00:31:16.883 Controller Busy Time: 0 minutes 00:31:16.883 Power Cycles: 0 00:31:16.883 Power On Hours: 0 hours 00:31:16.883 Unsafe Shutdowns: 0 00:31:16.883 Unrecoverable Media Errors: 0 00:31:16.883 Lifetime Error Log Entries: 0 00:31:16.883 Warning Temperature Time: 0 minutes 00:31:16.883 Critical Temperature Time: 0 minutes 00:31:16.883 00:31:16.883 Number of Queues 00:31:16.883 ================ 00:31:16.883 Number of I/O Submission Queues: 64 00:31:16.883 Number of I/O Completion Queues: 64 00:31:16.883 00:31:16.883 ZNS Specific Controller Data 00:31:16.883 ============================ 00:31:16.883 Zone Append Size Limit: 0 00:31:16.883 00:31:16.883 00:31:16.883 Active Namespaces 00:31:16.883 ================= 00:31:16.883 Namespace ID:1 00:31:16.883 Error Recovery Timeout: Unlimited 00:31:16.883 Command Set Identifier: NVM (00h) 00:31:16.883 Deallocate: Supported 00:31:16.883 Deallocated/Unwritten Error: Supported 00:31:16.883 Deallocated Read Value: All 0x00 00:31:16.883 Deallocate in Write Zeroes: Not Supported 00:31:16.883 Deallocated Guard Field: 0xFFFF 00:31:16.883 Flush: Supported 00:31:16.883 Reservation: Not Supported 00:31:16.883 Namespace Sharing Capabilities: Multiple Controllers 00:31:16.883 Size (in LBAs): 262144 (1GiB) 00:31:16.883 Capacity (in LBAs): 262144 (1GiB) 00:31:16.883 Utilization (in LBAs): 262144 (1GiB) 00:31:16.883 Thin Provisioning: Not Supported 00:31:16.883 Per-NS Atomic Units: No 00:31:16.883 Maximum Single Source Range Length: 128 00:31:16.883 Maximum Copy Length: 128 00:31:16.883 Maximum Source Range Count: 128 00:31:16.883 NGUID/EUI64 Never Reused: No 00:31:16.883 Namespace Write Protected: No 
00:31:16.883 Endurance group ID: 1 00:31:16.883 Number of LBA Formats: 8 00:31:16.883 Current LBA Format: LBA Format #04 00:31:16.883 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:16.883 LBA Format #01: Data Size: 512 Metadata Size: 8 00:31:16.883 LBA Format #02: Data Size: 512 Metadata Size: 16 00:31:16.883 LBA Format #03: Data Size: 512 Metadata Size: 64 00:31:16.883 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:31:16.883 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:31:16.883 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:31:16.883 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:31:16.883 00:31:16.883 Get Feature FDP: 00:31:16.883 ================ 00:31:16.883 Enabled: Yes 00:31:16.883 FDP configuration index: 0 00:31:16.883 00:31:16.883 FDP configurations log page 00:31:16.883 =========================== 00:31:16.883 Number of FDP configurations: 1 00:31:16.883 Version: 0 00:31:16.883 Size: 112 00:31:16.883 FDP Configuration Descriptor: 0 00:31:16.883 Descriptor Size: 96 00:31:16.883 Reclaim Group Identifier format: 2 00:31:16.883 FDP Volatile Write Cache: Not Present 00:31:16.883 FDP Configuration: Valid 00:31:16.883 Vendor Specific Size: 0 00:31:16.883 Number of Reclaim Groups: 2 00:31:16.883 Number of Reclaim Unit Handles: 8 00:31:16.883 Max Placement Identifiers: 128 00:31:16.883 Number of Namespaces Supported: 256 00:31:16.883 Reclaim Unit Nominal Size: 6000000 bytes 00:31:16.883 Estimated Reclaim Unit Time Limit: Not Reported 00:31:16.883 RUH Desc #000: RUH Type: Initially Isolated 00:31:16.883 RUH Desc #001: RUH Type: Initially Isolated 00:31:16.883 RUH Desc #002: RUH Type: Initially Isolated 00:31:16.883 RUH Desc #003: RUH Type: Initially Isolated 00:31:16.883 RUH Desc #004: RUH Type: Initially Isolated 00:31:16.883 RUH Desc #005: RUH Type: Initially Isolated 00:31:16.883 RUH Desc #006: RUH Type: Initially Isolated 00:31:16.883 RUH Desc #007: RUH Type: Initially Isolated 00:31:16.883 00:31:16.883 FDP reclaim unit handle usage log page 00:31:16.883 ====================================== 00:31:16.883 Number of Reclaim Unit Handles: 8 00:31:16.883 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:31:16.883 RUH Usage Desc #001: RUH Attributes: Unused 00:31:16.883 RUH Usage Desc #002: RUH Attributes: Unused 00:31:16.883 RUH Usage Desc #003: RUH Attributes: Unused 00:31:16.883 RUH Usage Desc #004: RUH Attributes: Unused 00:31:16.883 RUH Usage Desc #005: RUH Attributes: Unused 00:31:16.883 RUH Usage Desc #006: RUH Attributes: Unused 00:31:16.883 RUH Usage Desc #007: RUH Attributes: Unused 00:31:16.883 00:31:16.883 FDP statistics log page 00:31:16.883 ======================= 00:31:16.883 Host bytes with metadata written: 410267648 00:31:16.883 Media bytes with metadata written: 410320896 00:31:16.883 Media bytes erased: 0 00:31:16.883 00:31:16.883 FDP events log page 00:31:16.883 =================== 00:31:16.883 Number of FDP events: 0 00:31:16.883 00:31:16.883 NVM Specific Namespace Data 00:31:16.883 =========================== 00:31:16.883 Logical Block Storage Tag Mask: 0 00:31:16.883 Protection Information Capabilities: 00:31:16.883 16b Guard Protection Information Storage Tag Support: No 00:31:16.883 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:31:16.883 Storage Tag Check Read Support: No 00:31:16.883 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.883 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.883
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.883 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.883 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.883 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.883 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.883 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:31:16.883 00:31:16.883 real 0m2.088s 00:31:16.883 user 0m0.741s 00:31:16.883 sys 0m1.105s 00:31:16.883 13:26:09 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:16.883 ************************************ 00:31:16.883 END TEST nvme_identify 00:31:16.883 ************************************ 00:31:16.883 13:26:09 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:31:16.883 13:26:09 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:31:16.883 13:26:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:16.883 13:26:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:16.883 13:26:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:16.883 ************************************ 00:31:16.883 START TEST nvme_perf 00:31:16.883 ************************************ 00:31:16.883 13:26:09 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:31:16.883 13:26:09 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:31:18.262 Initializing NVMe Controllers 00:31:18.262 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:18.262 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:18.262 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:18.262 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:18.262 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:31:18.262 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:31:18.262 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:31:18.262 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:31:18.262 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:31:18.262 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:31:18.262 Initialization complete. Launching workers. 
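The two binaries exercised by this stage appear verbatim in the trace above: spdk_nvme_identify is run once per PCIe BDF by the nvme.sh@15-16 loop, and spdk_nvme_perf then drives all attached controllers at once. A minimal standalone sketch of the same pattern, assuming the SPDK examples are built under ./build/bin and that the bdfs array (a stand-in for the one the harness populates) holds the controllers listed above:

  #!/usr/bin/env bash
  # Sketch only -- SPDK_BIN and the bdfs list are assumptions, not the harness.
  SPDK_BIN=./build/bin
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:13.0 0000:00:12.0)

  # Dump Identify controller and namespace data for each controller,
  # mirroring the per-bdf loop in nvme.sh seen in the trace.
  for bdf in "${bdfs[@]}"; do
      "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
  done

  # Reproduce the perf invocation logged above: queue depth 128, a read
  # workload, 12288-byte I/Os, for 1 second; the -LL/-i/-N flags are copied
  # verbatim from the trace (latency tracking accounts for the per-bucket
  # histograms that follow in the output).
  "$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N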
00:31:18.262 ======================================================== 00:31:18.262 Latency(us) 00:31:18.262 Device Information : IOPS MiB/s Average min max 00:31:18.262 PCIE (0000:00:10.0) NSID 1 from core 0: 10701.49 125.41 12015.46 8407.83 49186.33 00:31:18.262 PCIE (0000:00:11.0) NSID 1 from core 0: 10701.49 125.41 11989.85 8490.46 45673.91 00:31:18.262 PCIE (0000:00:13.0) NSID 1 from core 0: 10701.49 125.41 11959.88 8530.04 43208.15 00:31:18.262 PCIE (0000:00:12.0) NSID 1 from core 0: 10701.49 125.41 11930.04 8506.45 40027.95 00:31:18.262 PCIE (0000:00:12.0) NSID 2 from core 0: 10701.49 125.41 11900.11 8474.57 36847.41 00:31:18.262 PCIE (0000:00:12.0) NSID 3 from core 0: 10701.49 125.41 11869.91 8473.54 33484.98 00:31:18.262 ======================================================== 00:31:18.262 Total : 64208.93 752.45 11944.21 8407.83 49186.33 00:31:18.262 00:31:18.262 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:31:18.262 ================================================================================= 00:31:18.262 1.00000% : 8738.133us 00:31:18.262 10.00000% : 9487.116us 00:31:18.262 25.00000% : 10360.930us 00:31:18.262 50.00000% : 11297.158us 00:31:18.262 75.00000% : 12483.048us 00:31:18.262 90.00000% : 14542.750us 00:31:18.262 95.00000% : 16852.114us 00:31:18.262 98.00000% : 18599.741us 00:31:18.262 99.00000% : 37199.482us 00:31:18.262 99.50000% : 46686.598us 00:31:18.262 99.90000% : 48933.547us 00:31:18.262 99.99000% : 49183.208us 00:31:18.262 99.99900% : 49432.869us 00:31:18.262 99.99990% : 49432.869us 00:31:18.262 99.99999% : 49432.869us 00:31:18.262 00:31:18.262 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:31:18.262 ================================================================================= 00:31:18.262 1.00000% : 8862.964us 00:31:18.262 10.00000% : 9487.116us 00:31:18.262 25.00000% : 10298.514us 00:31:18.262 50.00000% : 11297.158us 00:31:18.262 75.00000% : 12545.463us 00:31:18.262 90.00000% : 14667.581us 00:31:18.262 95.00000% : 17101.775us 00:31:18.262 98.00000% : 19223.893us 00:31:18.262 99.00000% : 34702.872us 00:31:18.262 99.50000% : 43690.667us 00:31:18.262 99.90000% : 45438.293us 00:31:18.262 99.99000% : 45687.954us 00:31:18.262 99.99900% : 45687.954us 00:31:18.262 99.99990% : 45687.954us 00:31:18.262 99.99999% : 45687.954us 00:31:18.262 00:31:18.262 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:31:18.262 ================================================================================= 00:31:18.262 1.00000% : 8862.964us 00:31:18.262 10.00000% : 9549.531us 00:31:18.262 25.00000% : 10360.930us 00:31:18.262 50.00000% : 11297.158us 00:31:18.262 75.00000% : 12607.878us 00:31:18.262 90.00000% : 14542.750us 00:31:18.262 95.00000% : 16352.792us 00:31:18.262 98.00000% : 18849.402us 00:31:18.262 99.00000% : 32705.585us 00:31:18.262 99.50000% : 41194.057us 00:31:18.262 99.90000% : 42941.684us 00:31:18.262 99.99000% : 43191.345us 00:31:18.262 99.99900% : 43441.006us 00:31:18.262 99.99990% : 43441.006us 00:31:18.262 99.99999% : 43441.006us 00:31:18.262 00:31:18.262 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:31:18.262 ================================================================================= 00:31:18.262 1.00000% : 8862.964us 00:31:18.262 10.00000% : 9549.531us 00:31:18.262 25.00000% : 10360.930us 00:31:18.262 50.00000% : 11297.158us 00:31:18.262 75.00000% : 12607.878us 00:31:18.262 90.00000% : 14355.505us 00:31:18.262 95.00000% : 16477.623us 00:31:18.262 98.00000% : 19348.724us 
00:31:18.262 99.00000% : 29584.823us 00:31:18.262 99.50000% : 37948.465us 00:31:18.262 99.90000% : 39696.091us 00:31:18.262 99.99000% : 40195.413us 00:31:18.262 99.99900% : 40195.413us 00:31:18.262 99.99990% : 40195.413us 00:31:18.262 99.99999% : 40195.413us 00:31:18.262 00:31:18.262 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:31:18.262 ================================================================================= 00:31:18.262 1.00000% : 8862.964us 00:31:18.262 10.00000% : 9549.531us 00:31:18.262 25.00000% : 10360.930us 00:31:18.262 50.00000% : 11297.158us 00:31:18.262 75.00000% : 12607.878us 00:31:18.262 90.00000% : 14605.166us 00:31:18.262 95.00000% : 16477.623us 00:31:18.262 98.00000% : 19723.215us 00:31:18.262 99.00000% : 26339.230us 00:31:18.262 99.50000% : 34702.872us 00:31:18.262 99.90000% : 36450.499us 00:31:18.262 99.99000% : 36949.821us 00:31:18.262 99.99900% : 36949.821us 00:31:18.262 99.99990% : 36949.821us 00:31:18.262 99.99999% : 36949.821us 00:31:18.262 00:31:18.262 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:31:18.262 ================================================================================= 00:31:18.262 1.00000% : 8862.964us 00:31:18.262 10.00000% : 9487.116us 00:31:18.262 25.00000% : 10298.514us 00:31:18.262 50.00000% : 11297.158us 00:31:18.262 75.00000% : 12607.878us 00:31:18.262 90.00000% : 14667.581us 00:31:18.262 95.00000% : 16727.284us 00:31:18.262 98.00000% : 20222.537us 00:31:18.262 99.00000% : 23218.469us 00:31:18.262 99.50000% : 31332.450us 00:31:18.262 99.90000% : 33204.907us 00:31:18.262 99.99000% : 33454.568us 00:31:18.262 99.99900% : 33704.229us 00:31:18.262 99.99990% : 33704.229us 00:31:18.262 99.99999% : 33704.229us 00:31:18.262 00:31:18.262 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:31:18.262 ============================================================================== 00:31:18.262 Range in us Cumulative IO count 00:31:18.262 8363.642 - 8426.057: 0.0372% ( 4) 00:31:18.262 8426.057 - 8488.472: 0.1488% ( 12) 00:31:18.262 8488.472 - 8550.888: 0.2883% ( 15) 00:31:18.262 8550.888 - 8613.303: 0.4557% ( 18) 00:31:18.262 8613.303 - 8675.718: 0.7254% ( 29) 00:31:18.262 8675.718 - 8738.133: 1.0045% ( 30) 00:31:18.262 8738.133 - 8800.549: 1.4044% ( 43) 00:31:18.262 8800.549 - 8862.964: 1.8973% ( 53) 00:31:18.262 8862.964 - 8925.379: 2.4368% ( 58) 00:31:18.262 8925.379 - 8987.794: 3.1064% ( 72) 00:31:18.262 8987.794 - 9050.210: 3.8132% ( 76) 00:31:18.262 9050.210 - 9112.625: 4.5852% ( 83) 00:31:18.262 9112.625 - 9175.040: 5.4967% ( 98) 00:31:18.262 9175.040 - 9237.455: 6.5197% ( 110) 00:31:18.262 9237.455 - 9299.870: 7.5800% ( 114) 00:31:18.262 9299.870 - 9362.286: 8.6961% ( 120) 00:31:18.262 9362.286 - 9424.701: 9.6819% ( 106) 00:31:18.262 9424.701 - 9487.116: 10.6585% ( 105) 00:31:18.262 9487.116 - 9549.531: 11.8118% ( 124) 00:31:18.262 9549.531 - 9611.947: 12.9929% ( 127) 00:31:18.262 9611.947 - 9674.362: 14.1462% ( 124) 00:31:18.262 9674.362 - 9736.777: 15.2809% ( 122) 00:31:18.262 9736.777 - 9799.192: 16.5737% ( 139) 00:31:18.262 9799.192 - 9861.608: 17.8013% ( 132) 00:31:18.262 9861.608 - 9924.023: 18.9825% ( 127) 00:31:18.262 9924.023 - 9986.438: 20.1451% ( 125) 00:31:18.262 9986.438 - 10048.853: 21.2333% ( 117) 00:31:18.262 10048.853 - 10111.269: 22.2284% ( 107) 00:31:18.262 10111.269 - 10173.684: 23.1957% ( 104) 00:31:18.262 10173.684 - 10236.099: 24.1071% ( 98) 00:31:18.262 10236.099 - 10298.514: 24.9814% ( 94) 00:31:18.262 10298.514 - 10360.930: 25.8092% ( 89) 00:31:18.262 
10360.930 - 10423.345: 26.7113% ( 97) 00:31:18.262 10423.345 - 10485.760: 27.6042% ( 96) 00:31:18.262 10485.760 - 10548.175: 28.7760% ( 126) 00:31:18.263 10548.175 - 10610.590: 30.1060% ( 143) 00:31:18.263 10610.590 - 10673.006: 31.5197% ( 152) 00:31:18.263 10673.006 - 10735.421: 33.0822% ( 168) 00:31:18.263 10735.421 - 10797.836: 34.7935% ( 184) 00:31:18.263 10797.836 - 10860.251: 36.6908% ( 204) 00:31:18.263 10860.251 - 10922.667: 38.4952% ( 194) 00:31:18.263 10922.667 - 10985.082: 40.4576% ( 211) 00:31:18.263 10985.082 - 11047.497: 42.5409% ( 224) 00:31:18.263 11047.497 - 11109.912: 44.6522% ( 227) 00:31:18.263 11109.912 - 11172.328: 46.8750% ( 239) 00:31:18.263 11172.328 - 11234.743: 49.0420% ( 233) 00:31:18.263 11234.743 - 11297.158: 51.2649% ( 239) 00:31:18.263 11297.158 - 11359.573: 53.4040% ( 230) 00:31:18.263 11359.573 - 11421.989: 55.4501% ( 220) 00:31:18.263 11421.989 - 11484.404: 57.4777% ( 218) 00:31:18.263 11484.404 - 11546.819: 59.5610% ( 224) 00:31:18.263 11546.819 - 11609.234: 61.5141% ( 210) 00:31:18.263 11609.234 - 11671.650: 63.2068% ( 182) 00:31:18.263 11671.650 - 11734.065: 64.6577% ( 156) 00:31:18.263 11734.065 - 11796.480: 66.1365% ( 159) 00:31:18.263 11796.480 - 11858.895: 67.5130% ( 148) 00:31:18.263 11858.895 - 11921.310: 68.7314% ( 131) 00:31:18.263 11921.310 - 11983.726: 69.7545% ( 110) 00:31:18.263 11983.726 - 12046.141: 70.6752% ( 99) 00:31:18.263 12046.141 - 12108.556: 71.4193% ( 80) 00:31:18.263 12108.556 - 12170.971: 72.0517% ( 68) 00:31:18.263 12170.971 - 12233.387: 72.6097% ( 60) 00:31:18.263 12233.387 - 12295.802: 73.2329% ( 67) 00:31:18.263 12295.802 - 12358.217: 73.8188% ( 63) 00:31:18.263 12358.217 - 12420.632: 74.4327% ( 66) 00:31:18.263 12420.632 - 12483.048: 75.0093% ( 62) 00:31:18.263 12483.048 - 12545.463: 75.6138% ( 65) 00:31:18.263 12545.463 - 12607.878: 76.1719% ( 60) 00:31:18.263 12607.878 - 12670.293: 76.8136% ( 69) 00:31:18.263 12670.293 - 12732.709: 77.4089% ( 64) 00:31:18.263 12732.709 - 12795.124: 77.9576% ( 59) 00:31:18.263 12795.124 - 12857.539: 78.4970% ( 58) 00:31:18.263 12857.539 - 12919.954: 78.9156% ( 45) 00:31:18.263 12919.954 - 12982.370: 79.3806% ( 50) 00:31:18.263 12982.370 - 13044.785: 79.8456% ( 50) 00:31:18.263 13044.785 - 13107.200: 80.3850% ( 58) 00:31:18.263 13107.200 - 13169.615: 80.8873% ( 54) 00:31:18.263 13169.615 - 13232.030: 81.4081% ( 56) 00:31:18.263 13232.030 - 13294.446: 81.9289% ( 56) 00:31:18.263 13294.446 - 13356.861: 82.5335% ( 65) 00:31:18.263 13356.861 - 13419.276: 83.1473% ( 66) 00:31:18.263 13419.276 - 13481.691: 83.7240% ( 62) 00:31:18.263 13481.691 - 13544.107: 84.1983% ( 51) 00:31:18.263 13544.107 - 13606.522: 84.7098% ( 55) 00:31:18.263 13606.522 - 13668.937: 85.2121% ( 54) 00:31:18.263 13668.937 - 13731.352: 85.5748% ( 39) 00:31:18.263 13731.352 - 13793.768: 85.9747% ( 43) 00:31:18.263 13793.768 - 13856.183: 86.3653% ( 42) 00:31:18.263 13856.183 - 13918.598: 86.7188% ( 38) 00:31:18.263 13918.598 - 13981.013: 87.0815% ( 39) 00:31:18.263 13981.013 - 14043.429: 87.4814% ( 43) 00:31:18.263 14043.429 - 14105.844: 87.8441% ( 39) 00:31:18.263 14105.844 - 14168.259: 88.2533% ( 44) 00:31:18.263 14168.259 - 14230.674: 88.6068% ( 38) 00:31:18.263 14230.674 - 14293.090: 88.9881% ( 41) 00:31:18.263 14293.090 - 14355.505: 89.2857% ( 32) 00:31:18.263 14355.505 - 14417.920: 89.5740% ( 31) 00:31:18.263 14417.920 - 14480.335: 89.8717% ( 32) 00:31:18.263 14480.335 - 14542.750: 90.1135% ( 26) 00:31:18.263 14542.750 - 14605.166: 90.3553% ( 26) 00:31:18.263 14605.166 - 14667.581: 90.5413% ( 20) 00:31:18.263 14667.581 - 
14729.996: 90.6901% ( 16) 00:31:18.263 14729.996 - 14792.411: 90.8482% ( 17) 00:31:18.263 14792.411 - 14854.827: 91.0156% ( 18) 00:31:18.263 14854.827 - 14917.242: 91.1551% ( 15) 00:31:18.263 14917.242 - 14979.657: 91.3318% ( 19) 00:31:18.263 14979.657 - 15042.072: 91.4807% ( 16) 00:31:18.263 15042.072 - 15104.488: 91.5737% ( 10) 00:31:18.263 15104.488 - 15166.903: 91.6574% ( 9) 00:31:18.263 15166.903 - 15229.318: 91.7225% ( 7) 00:31:18.263 15229.318 - 15291.733: 91.8248% ( 11) 00:31:18.263 15291.733 - 15354.149: 91.9178% ( 10) 00:31:18.263 15354.149 - 15416.564: 92.0201% ( 11) 00:31:18.263 15416.564 - 15478.979: 92.1224% ( 11) 00:31:18.263 15478.979 - 15541.394: 92.2340% ( 12) 00:31:18.263 15541.394 - 15603.810: 92.3363% ( 11) 00:31:18.263 15603.810 - 15666.225: 92.4479% ( 12) 00:31:18.263 15666.225 - 15728.640: 92.5316% ( 9) 00:31:18.263 15728.640 - 15791.055: 92.6060% ( 8) 00:31:18.263 15791.055 - 15853.470: 92.7176% ( 12) 00:31:18.263 15853.470 - 15915.886: 92.8478% ( 14) 00:31:18.263 15915.886 - 15978.301: 92.9874% ( 15) 00:31:18.263 15978.301 - 16103.131: 93.2850% ( 32) 00:31:18.263 16103.131 - 16227.962: 93.6105% ( 35) 00:31:18.263 16227.962 - 16352.792: 93.9639% ( 38) 00:31:18.263 16352.792 - 16477.623: 94.3080% ( 37) 00:31:18.263 16477.623 - 16602.453: 94.6522% ( 37) 00:31:18.263 16602.453 - 16727.284: 94.9777% ( 35) 00:31:18.263 16727.284 - 16852.114: 95.3311% ( 38) 00:31:18.263 16852.114 - 16976.945: 95.6659% ( 36) 00:31:18.263 16976.945 - 17101.775: 95.9821% ( 34) 00:31:18.263 17101.775 - 17226.606: 96.2333% ( 27) 00:31:18.263 17226.606 - 17351.436: 96.4751% ( 26) 00:31:18.263 17351.436 - 17476.267: 96.6797% ( 22) 00:31:18.263 17476.267 - 17601.097: 96.8843% ( 22) 00:31:18.263 17601.097 - 17725.928: 97.0796% ( 21) 00:31:18.263 17725.928 - 17850.758: 97.3121% ( 25) 00:31:18.263 17850.758 - 17975.589: 97.4702% ( 17) 00:31:18.263 17975.589 - 18100.419: 97.6004% ( 14) 00:31:18.263 18100.419 - 18225.250: 97.7214% ( 13) 00:31:18.263 18225.250 - 18350.080: 97.8516% ( 14) 00:31:18.263 18350.080 - 18474.910: 97.9167% ( 7) 00:31:18.263 18474.910 - 18599.741: 98.0004% ( 9) 00:31:18.263 18599.741 - 18724.571: 98.0934% ( 10) 00:31:18.263 18724.571 - 18849.402: 98.1213% ( 3) 00:31:18.263 18849.402 - 18974.232: 98.1492% ( 3) 00:31:18.263 18974.232 - 19099.063: 98.1864% ( 4) 00:31:18.263 19099.063 - 19223.893: 98.2143% ( 3) 00:31:18.263 19473.554 - 19598.385: 98.2422% ( 3) 00:31:18.263 19598.385 - 19723.215: 98.2701% ( 3) 00:31:18.263 19723.215 - 19848.046: 98.3073% ( 4) 00:31:18.263 19848.046 - 19972.876: 98.3259% ( 2) 00:31:18.263 19972.876 - 20097.707: 98.3631% ( 4) 00:31:18.263 20097.707 - 20222.537: 98.3817% ( 2) 00:31:18.263 20222.537 - 20347.368: 98.4189% ( 4) 00:31:18.263 20347.368 - 20472.198: 98.4468% ( 3) 00:31:18.263 20472.198 - 20597.029: 98.4747% ( 3) 00:31:18.263 20597.029 - 20721.859: 98.4933% ( 2) 00:31:18.263 20721.859 - 20846.690: 98.5305% ( 4) 00:31:18.263 20846.690 - 20971.520: 98.5584% ( 3) 00:31:18.263 20971.520 - 21096.350: 98.5956% ( 4) 00:31:18.263 21096.350 - 21221.181: 98.6235% ( 3) 00:31:18.263 21221.181 - 21346.011: 98.6514% ( 3) 00:31:18.263 21346.011 - 21470.842: 98.6793% ( 3) 00:31:18.263 21470.842 - 21595.672: 98.7165% ( 4) 00:31:18.263 21595.672 - 21720.503: 98.7444% ( 3) 00:31:18.263 21720.503 - 21845.333: 98.7816% ( 4) 00:31:18.263 21845.333 - 21970.164: 98.8095% ( 3) 00:31:18.263 35951.177 - 36200.838: 98.8281% ( 2) 00:31:18.263 36200.838 - 36450.499: 98.8746% ( 5) 00:31:18.263 36450.499 - 36700.160: 98.9211% ( 5) 00:31:18.263 36700.160 - 36949.821: 
98.9583% ( 4) 00:31:18.263 36949.821 - 37199.482: 99.0141% ( 6) 00:31:18.263 37199.482 - 37449.143: 99.0606% ( 5) 00:31:18.263 37449.143 - 37698.804: 99.0978% ( 4) 00:31:18.263 37698.804 - 37948.465: 99.1536% ( 6) 00:31:18.263 37948.465 - 38198.126: 99.2001% ( 5) 00:31:18.263 38198.126 - 38447.787: 99.2467% ( 5) 00:31:18.263 38447.787 - 38697.448: 99.3025% ( 6) 00:31:18.263 38697.448 - 38947.109: 99.3490% ( 5) 00:31:18.263 38947.109 - 39196.770: 99.3862% ( 4) 00:31:18.263 39196.770 - 39446.430: 99.4048% ( 2) 00:31:18.263 45937.615 - 46187.276: 99.4327% ( 3) 00:31:18.263 46187.276 - 46436.937: 99.4792% ( 5) 00:31:18.263 46436.937 - 46686.598: 99.5257% ( 5) 00:31:18.263 46686.598 - 46936.259: 99.5722% ( 5) 00:31:18.263 46936.259 - 47185.920: 99.6280% ( 6) 00:31:18.263 47185.920 - 47435.581: 99.6652% ( 4) 00:31:18.263 47435.581 - 47685.242: 99.7210% ( 6) 00:31:18.263 47685.242 - 47934.903: 99.7675% ( 5) 00:31:18.263 47934.903 - 48184.564: 99.8047% ( 4) 00:31:18.263 48184.564 - 48434.225: 99.8512% ( 5) 00:31:18.263 48434.225 - 48683.886: 99.8977% ( 5) 00:31:18.263 48683.886 - 48933.547: 99.9535% ( 6) 00:31:18.263 48933.547 - 49183.208: 99.9907% ( 4) 00:31:18.263 49183.208 - 49432.869: 100.0000% ( 1) 00:31:18.263 00:31:18.263 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:31:18.263 ============================================================================== 00:31:18.263 Range in us Cumulative IO count 00:31:18.263 8488.472 - 8550.888: 0.0837% ( 9) 00:31:18.263 8550.888 - 8613.303: 0.2046% ( 13) 00:31:18.263 8613.303 - 8675.718: 0.3627% ( 17) 00:31:18.263 8675.718 - 8738.133: 0.5766% ( 23) 00:31:18.263 8738.133 - 8800.549: 0.8557% ( 30) 00:31:18.263 8800.549 - 8862.964: 1.2463% ( 42) 00:31:18.263 8862.964 - 8925.379: 1.7113% ( 50) 00:31:18.263 8925.379 - 8987.794: 2.3251% ( 66) 00:31:18.263 8987.794 - 9050.210: 3.0227% ( 75) 00:31:18.263 9050.210 - 9112.625: 3.8411% ( 88) 00:31:18.263 9112.625 - 9175.040: 4.7061% ( 93) 00:31:18.263 9175.040 - 9237.455: 5.6641% ( 103) 00:31:18.263 9237.455 - 9299.870: 6.7894% ( 121) 00:31:18.263 9299.870 - 9362.286: 7.9706% ( 127) 00:31:18.263 9362.286 - 9424.701: 9.1890% ( 131) 00:31:18.264 9424.701 - 9487.116: 10.5097% ( 142) 00:31:18.264 9487.116 - 9549.531: 11.8490% ( 144) 00:31:18.264 9549.531 - 9611.947: 13.1417% ( 139) 00:31:18.264 9611.947 - 9674.362: 14.4252% ( 138) 00:31:18.264 9674.362 - 9736.777: 15.8389% ( 152) 00:31:18.264 9736.777 - 9799.192: 17.1596% ( 142) 00:31:18.264 9799.192 - 9861.608: 18.4989% ( 144) 00:31:18.264 9861.608 - 9924.023: 19.8010% ( 140) 00:31:18.264 9924.023 - 9986.438: 21.0379% ( 133) 00:31:18.264 9986.438 - 10048.853: 22.2005% ( 125) 00:31:18.264 10048.853 - 10111.269: 23.1399% ( 101) 00:31:18.264 10111.269 - 10173.684: 24.0513% ( 98) 00:31:18.264 10173.684 - 10236.099: 24.8419% ( 85) 00:31:18.264 10236.099 - 10298.514: 25.5952% ( 81) 00:31:18.264 10298.514 - 10360.930: 26.3486% ( 81) 00:31:18.264 10360.930 - 10423.345: 27.1019% ( 81) 00:31:18.264 10423.345 - 10485.760: 27.9018% ( 86) 00:31:18.264 10485.760 - 10548.175: 28.8225% ( 99) 00:31:18.264 10548.175 - 10610.590: 29.8456% ( 110) 00:31:18.264 10610.590 - 10673.006: 31.1570% ( 141) 00:31:18.264 10673.006 - 10735.421: 32.5986% ( 155) 00:31:18.264 10735.421 - 10797.836: 34.3285% ( 186) 00:31:18.264 10797.836 - 10860.251: 36.1514% ( 196) 00:31:18.264 10860.251 - 10922.667: 38.0859% ( 208) 00:31:18.264 10922.667 - 10985.082: 40.1135% ( 218) 00:31:18.264 10985.082 - 11047.497: 42.3270% ( 238) 00:31:18.264 11047.497 - 11109.912: 44.5219% ( 236) 00:31:18.264 
11109.912 - 11172.328: 46.7913% ( 244) 00:31:18.264 11172.328 - 11234.743: 49.2374% ( 263) 00:31:18.264 11234.743 - 11297.158: 51.6648% ( 261) 00:31:18.264 11297.158 - 11359.573: 54.0179% ( 253) 00:31:18.264 11359.573 - 11421.989: 56.1756% ( 232) 00:31:18.264 11421.989 - 11484.404: 58.2775% ( 226) 00:31:18.264 11484.404 - 11546.819: 60.3051% ( 218) 00:31:18.264 11546.819 - 11609.234: 62.0071% ( 183) 00:31:18.264 11609.234 - 11671.650: 63.6254% ( 174) 00:31:18.264 11671.650 - 11734.065: 64.8810% ( 135) 00:31:18.264 11734.065 - 11796.480: 66.0714% ( 128) 00:31:18.264 11796.480 - 11858.895: 67.2247% ( 124) 00:31:18.264 11858.895 - 11921.310: 68.3594% ( 122) 00:31:18.264 11921.310 - 11983.726: 69.2894% ( 100) 00:31:18.264 11983.726 - 12046.141: 70.1730% ( 95) 00:31:18.264 12046.141 - 12108.556: 71.0379% ( 93) 00:31:18.264 12108.556 - 12170.971: 71.7448% ( 76) 00:31:18.264 12170.971 - 12233.387: 72.4144% ( 72) 00:31:18.264 12233.387 - 12295.802: 73.0469% ( 68) 00:31:18.264 12295.802 - 12358.217: 73.6049% ( 60) 00:31:18.264 12358.217 - 12420.632: 74.1815% ( 62) 00:31:18.264 12420.632 - 12483.048: 74.8512% ( 72) 00:31:18.264 12483.048 - 12545.463: 75.5115% ( 71) 00:31:18.264 12545.463 - 12607.878: 76.2463% ( 79) 00:31:18.264 12607.878 - 12670.293: 76.9531% ( 76) 00:31:18.264 12670.293 - 12732.709: 77.6414% ( 74) 00:31:18.264 12732.709 - 12795.124: 78.2459% ( 65) 00:31:18.264 12795.124 - 12857.539: 78.8318% ( 63) 00:31:18.264 12857.539 - 12919.954: 79.3062% ( 51) 00:31:18.264 12919.954 - 12982.370: 79.8456% ( 58) 00:31:18.264 12982.370 - 13044.785: 80.3292% ( 52) 00:31:18.264 13044.785 - 13107.200: 80.9245% ( 64) 00:31:18.264 13107.200 - 13169.615: 81.4081% ( 52) 00:31:18.264 13169.615 - 13232.030: 81.9661% ( 60) 00:31:18.264 13232.030 - 13294.446: 82.5056% ( 58) 00:31:18.264 13294.446 - 13356.861: 83.0822% ( 62) 00:31:18.264 13356.861 - 13419.276: 83.6310% ( 59) 00:31:18.264 13419.276 - 13481.691: 84.2541% ( 67) 00:31:18.264 13481.691 - 13544.107: 84.8028% ( 59) 00:31:18.264 13544.107 - 13606.522: 85.3144% ( 55) 00:31:18.264 13606.522 - 13668.937: 85.7608% ( 48) 00:31:18.264 13668.937 - 13731.352: 86.1886% ( 46) 00:31:18.264 13731.352 - 13793.768: 86.5792% ( 42) 00:31:18.264 13793.768 - 13856.183: 86.9141% ( 36) 00:31:18.264 13856.183 - 13918.598: 87.2396% ( 35) 00:31:18.264 13918.598 - 13981.013: 87.6023% ( 39) 00:31:18.264 13981.013 - 14043.429: 87.9092% ( 33) 00:31:18.264 14043.429 - 14105.844: 88.1975% ( 31) 00:31:18.264 14105.844 - 14168.259: 88.4487% ( 27) 00:31:18.264 14168.259 - 14230.674: 88.6812% ( 25) 00:31:18.264 14230.674 - 14293.090: 88.9230% ( 26) 00:31:18.264 14293.090 - 14355.505: 89.1276% ( 22) 00:31:18.264 14355.505 - 14417.920: 89.3415% ( 23) 00:31:18.264 14417.920 - 14480.335: 89.5461% ( 22) 00:31:18.264 14480.335 - 14542.750: 89.7507% ( 22) 00:31:18.264 14542.750 - 14605.166: 89.9554% ( 22) 00:31:18.264 14605.166 - 14667.581: 90.1507% ( 21) 00:31:18.264 14667.581 - 14729.996: 90.3646% ( 23) 00:31:18.264 14729.996 - 14792.411: 90.5692% ( 22) 00:31:18.264 14792.411 - 14854.827: 90.7552% ( 20) 00:31:18.264 14854.827 - 14917.242: 90.9505% ( 21) 00:31:18.264 14917.242 - 14979.657: 91.1458% ( 21) 00:31:18.264 14979.657 - 15042.072: 91.3039% ( 17) 00:31:18.264 15042.072 - 15104.488: 91.4621% ( 17) 00:31:18.264 15104.488 - 15166.903: 91.6202% ( 17) 00:31:18.264 15166.903 - 15229.318: 91.7411% ( 13) 00:31:18.264 15229.318 - 15291.733: 91.8434% ( 11) 00:31:18.264 15291.733 - 15354.149: 91.9550% ( 12) 00:31:18.264 15354.149 - 15416.564: 92.1038% ( 16) 00:31:18.264 15416.564 - 
15478.979: 92.2619% ( 17) 00:31:18.264 15478.979 - 15541.394: 92.4014% ( 15) 00:31:18.264 15541.394 - 15603.810: 92.5502% ( 16) 00:31:18.264 15603.810 - 15666.225: 92.6804% ( 14) 00:31:18.264 15666.225 - 15728.640: 92.8292% ( 16) 00:31:18.264 15728.640 - 15791.055: 92.9594% ( 14) 00:31:18.264 15791.055 - 15853.470: 93.0711% ( 12) 00:31:18.264 15853.470 - 15915.886: 93.1827% ( 12) 00:31:18.264 15915.886 - 15978.301: 93.2943% ( 12) 00:31:18.264 15978.301 - 16103.131: 93.4803% ( 20) 00:31:18.264 16103.131 - 16227.962: 93.7221% ( 26) 00:31:18.264 16227.962 - 16352.792: 93.9639% ( 26) 00:31:18.264 16352.792 - 16477.623: 94.1592% ( 21) 00:31:18.264 16477.623 - 16602.453: 94.3452% ( 20) 00:31:18.264 16602.453 - 16727.284: 94.5592% ( 23) 00:31:18.264 16727.284 - 16852.114: 94.7452% ( 20) 00:31:18.264 16852.114 - 16976.945: 94.9963% ( 27) 00:31:18.264 16976.945 - 17101.775: 95.3032% ( 33) 00:31:18.264 17101.775 - 17226.606: 95.5636% ( 28) 00:31:18.264 17226.606 - 17351.436: 95.8054% ( 26) 00:31:18.264 17351.436 - 17476.267: 96.0658% ( 28) 00:31:18.264 17476.267 - 17601.097: 96.2798% ( 23) 00:31:18.264 17601.097 - 17725.928: 96.5030% ( 24) 00:31:18.264 17725.928 - 17850.758: 96.6983% ( 21) 00:31:18.264 17850.758 - 17975.589: 96.8936% ( 21) 00:31:18.264 17975.589 - 18100.419: 97.0331% ( 15) 00:31:18.264 18100.419 - 18225.250: 97.1168% ( 9) 00:31:18.264 18225.250 - 18350.080: 97.1819% ( 7) 00:31:18.264 18350.080 - 18474.910: 97.3307% ( 16) 00:31:18.264 18474.910 - 18599.741: 97.4795% ( 16) 00:31:18.264 18599.741 - 18724.571: 97.5911% ( 12) 00:31:18.264 18724.571 - 18849.402: 97.7028% ( 12) 00:31:18.264 18849.402 - 18974.232: 97.8051% ( 11) 00:31:18.264 18974.232 - 19099.063: 97.8981% ( 10) 00:31:18.264 19099.063 - 19223.893: 98.0097% ( 12) 00:31:18.264 19223.893 - 19348.724: 98.1399% ( 14) 00:31:18.264 19348.724 - 19473.554: 98.1957% ( 6) 00:31:18.264 19473.554 - 19598.385: 98.2701% ( 8) 00:31:18.264 19598.385 - 19723.215: 98.3073% ( 4) 00:31:18.264 19723.215 - 19848.046: 98.3259% ( 2) 00:31:18.264 19848.046 - 19972.876: 98.3538% ( 3) 00:31:18.264 19972.876 - 20097.707: 98.3724% ( 2) 00:31:18.264 20097.707 - 20222.537: 98.3910% ( 2) 00:31:18.264 20222.537 - 20347.368: 98.4189% ( 3) 00:31:18.264 20347.368 - 20472.198: 98.4375% ( 2) 00:31:18.264 20472.198 - 20597.029: 98.4654% ( 3) 00:31:18.264 20597.029 - 20721.859: 98.4933% ( 3) 00:31:18.264 20721.859 - 20846.690: 98.5119% ( 2) 00:31:18.264 20846.690 - 20971.520: 98.5398% ( 3) 00:31:18.264 20971.520 - 21096.350: 98.5677% ( 3) 00:31:18.264 21096.350 - 21221.181: 98.5956% ( 3) 00:31:18.264 21221.181 - 21346.011: 98.6235% ( 3) 00:31:18.264 21346.011 - 21470.842: 98.6514% ( 3) 00:31:18.264 21470.842 - 21595.672: 98.6700% ( 2) 00:31:18.264 21595.672 - 21720.503: 98.6979% ( 3) 00:31:18.264 21720.503 - 21845.333: 98.7258% ( 3) 00:31:18.264 21845.333 - 21970.164: 98.7444% ( 2) 00:31:18.264 21970.164 - 22094.994: 98.7723% ( 3) 00:31:18.264 22094.994 - 22219.825: 98.8002% ( 3) 00:31:18.264 22219.825 - 22344.655: 98.8095% ( 1) 00:31:18.264 33454.568 - 33704.229: 98.8374% ( 3) 00:31:18.264 33704.229 - 33953.890: 98.8932% ( 6) 00:31:18.264 33953.890 - 34203.550: 98.9397% ( 5) 00:31:18.264 34203.550 - 34453.211: 98.9955% ( 6) 00:31:18.264 34453.211 - 34702.872: 99.0513% ( 6) 00:31:18.264 34702.872 - 34952.533: 99.0885% ( 4) 00:31:18.265 34952.533 - 35202.194: 99.1443% ( 6) 00:31:18.265 35202.194 - 35451.855: 99.2001% ( 6) 00:31:18.265 35451.855 - 35701.516: 99.2374% ( 4) 00:31:18.265 35701.516 - 35951.177: 99.2932% ( 6) 00:31:18.265 35951.177 - 36200.838: 99.3490% 
( 6) 00:31:18.265 36200.838 - 36450.499: 99.4048% ( 6) 00:31:18.265 42941.684 - 43191.345: 99.4327% ( 3) 00:31:18.265 43191.345 - 43441.006: 99.4885% ( 6) 00:31:18.265 43441.006 - 43690.667: 99.5536% ( 7) 00:31:18.265 43690.667 - 43940.328: 99.6001% ( 5) 00:31:18.265 43940.328 - 44189.989: 99.6559% ( 6) 00:31:18.265 44189.989 - 44439.650: 99.7117% ( 6) 00:31:18.265 44439.650 - 44689.310: 99.7675% ( 6) 00:31:18.265 44689.310 - 44938.971: 99.8233% ( 6) 00:31:18.265 44938.971 - 45188.632: 99.8884% ( 7) 00:31:18.265 45188.632 - 45438.293: 99.9442% ( 6) 00:31:18.265 45438.293 - 45687.954: 100.0000% ( 6) 00:31:18.265 00:31:18.265 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:31:18.265 ============================================================================== 00:31:18.265 Range in us Cumulative IO count 00:31:18.265 8488.472 - 8550.888: 0.0279% ( 3) 00:31:18.265 8550.888 - 8613.303: 0.1395% ( 12) 00:31:18.265 8613.303 - 8675.718: 0.2790% ( 15) 00:31:18.265 8675.718 - 8738.133: 0.4929% ( 23) 00:31:18.265 8738.133 - 8800.549: 0.7812% ( 31) 00:31:18.265 8800.549 - 8862.964: 1.2091% ( 46) 00:31:18.265 8862.964 - 8925.379: 1.6555% ( 48) 00:31:18.265 8925.379 - 8987.794: 2.2414% ( 63) 00:31:18.265 8987.794 - 9050.210: 2.9390% ( 75) 00:31:18.265 9050.210 - 9112.625: 3.6272% ( 74) 00:31:18.265 9112.625 - 9175.040: 4.4922% ( 93) 00:31:18.265 9175.040 - 9237.455: 5.3106% ( 88) 00:31:18.265 9237.455 - 9299.870: 6.2779% ( 104) 00:31:18.265 9299.870 - 9362.286: 7.3196% ( 112) 00:31:18.265 9362.286 - 9424.701: 8.4449% ( 121) 00:31:18.265 9424.701 - 9487.116: 9.6075% ( 125) 00:31:18.265 9487.116 - 9549.531: 10.8910% ( 138) 00:31:18.265 9549.531 - 9611.947: 12.1652% ( 137) 00:31:18.265 9611.947 - 9674.362: 13.5045% ( 144) 00:31:18.265 9674.362 - 9736.777: 14.7879% ( 138) 00:31:18.265 9736.777 - 9799.192: 16.0528% ( 136) 00:31:18.265 9799.192 - 9861.608: 17.3270% ( 137) 00:31:18.265 9861.608 - 9924.023: 18.5640% ( 133) 00:31:18.265 9924.023 - 9986.438: 19.6894% ( 121) 00:31:18.265 9986.438 - 10048.853: 20.7124% ( 110) 00:31:18.265 10048.853 - 10111.269: 21.6332% ( 99) 00:31:18.265 10111.269 - 10173.684: 22.5446% ( 98) 00:31:18.265 10173.684 - 10236.099: 23.4375% ( 96) 00:31:18.265 10236.099 - 10298.514: 24.3304% ( 96) 00:31:18.265 10298.514 - 10360.930: 25.0651% ( 79) 00:31:18.265 10360.930 - 10423.345: 25.8092% ( 80) 00:31:18.265 10423.345 - 10485.760: 26.7671% ( 103) 00:31:18.265 10485.760 - 10548.175: 27.9390% ( 126) 00:31:18.265 10548.175 - 10610.590: 29.1574% ( 131) 00:31:18.265 10610.590 - 10673.006: 30.4594% ( 140) 00:31:18.265 10673.006 - 10735.421: 31.9475% ( 160) 00:31:18.265 10735.421 - 10797.836: 33.5751% ( 175) 00:31:18.265 10797.836 - 10860.251: 35.4818% ( 205) 00:31:18.265 10860.251 - 10922.667: 37.5558% ( 223) 00:31:18.265 10922.667 - 10985.082: 39.6856% ( 229) 00:31:18.265 10985.082 - 11047.497: 41.8806% ( 236) 00:31:18.265 11047.497 - 11109.912: 44.0569% ( 234) 00:31:18.265 11109.912 - 11172.328: 46.3170% ( 243) 00:31:18.265 11172.328 - 11234.743: 48.6607% ( 252) 00:31:18.265 11234.743 - 11297.158: 50.9859% ( 250) 00:31:18.265 11297.158 - 11359.573: 53.3575% ( 255) 00:31:18.265 11359.573 - 11421.989: 55.5432% ( 235) 00:31:18.265 11421.989 - 11484.404: 57.5056% ( 211) 00:31:18.265 11484.404 - 11546.819: 59.3750% ( 201) 00:31:18.265 11546.819 - 11609.234: 61.1607% ( 192) 00:31:18.265 11609.234 - 11671.650: 62.6767% ( 163) 00:31:18.265 11671.650 - 11734.065: 64.1555% ( 159) 00:31:18.265 11734.065 - 11796.480: 65.4576% ( 140) 00:31:18.265 11796.480 - 11858.895: 66.5458% ( 117) 
00:31:18.265 11858.895 - 11921.310: 67.5409% ( 107) 00:31:18.265 11921.310 - 11983.726: 68.4803% ( 101) 00:31:18.265 11983.726 - 12046.141: 69.3545% ( 94) 00:31:18.265 12046.141 - 12108.556: 70.2102% ( 92) 00:31:18.265 12108.556 - 12170.971: 70.9356% ( 78) 00:31:18.265 12170.971 - 12233.387: 71.6332% ( 75) 00:31:18.265 12233.387 - 12295.802: 72.2470% ( 66) 00:31:18.265 12295.802 - 12358.217: 72.8423% ( 64) 00:31:18.265 12358.217 - 12420.632: 73.4189% ( 62) 00:31:18.265 12420.632 - 12483.048: 74.0048% ( 63) 00:31:18.265 12483.048 - 12545.463: 74.6373% ( 68) 00:31:18.265 12545.463 - 12607.878: 75.2790% ( 69) 00:31:18.265 12607.878 - 12670.293: 75.9301% ( 70) 00:31:18.265 12670.293 - 12732.709: 76.6648% ( 79) 00:31:18.265 12732.709 - 12795.124: 77.3531% ( 74) 00:31:18.265 12795.124 - 12857.539: 78.0785% ( 78) 00:31:18.265 12857.539 - 12919.954: 78.7481% ( 72) 00:31:18.265 12919.954 - 12982.370: 79.4736% ( 78) 00:31:18.265 12982.370 - 13044.785: 80.1339% ( 71) 00:31:18.265 13044.785 - 13107.200: 80.8315% ( 75) 00:31:18.265 13107.200 - 13169.615: 81.5383% ( 76) 00:31:18.265 13169.615 - 13232.030: 82.2266% ( 74) 00:31:18.265 13232.030 - 13294.446: 82.8962% ( 72) 00:31:18.265 13294.446 - 13356.861: 83.5658% ( 72) 00:31:18.265 13356.861 - 13419.276: 84.2541% ( 74) 00:31:18.265 13419.276 - 13481.691: 84.9144% ( 71) 00:31:18.265 13481.691 - 13544.107: 85.4353% ( 56) 00:31:18.265 13544.107 - 13606.522: 85.9654% ( 57) 00:31:18.265 13606.522 - 13668.937: 86.5048% ( 58) 00:31:18.265 13668.937 - 13731.352: 86.9513% ( 48) 00:31:18.265 13731.352 - 13793.768: 87.3233% ( 40) 00:31:18.265 13793.768 - 13856.183: 87.5930% ( 29) 00:31:18.265 13856.183 - 13918.598: 87.8069% ( 23) 00:31:18.265 13918.598 - 13981.013: 88.0022% ( 21) 00:31:18.265 13981.013 - 14043.429: 88.1603% ( 17) 00:31:18.265 14043.429 - 14105.844: 88.2999% ( 15) 00:31:18.265 14105.844 - 14168.259: 88.5789% ( 30) 00:31:18.265 14168.259 - 14230.674: 88.8486% ( 29) 00:31:18.265 14230.674 - 14293.090: 89.0904% ( 26) 00:31:18.265 14293.090 - 14355.505: 89.3229% ( 25) 00:31:18.265 14355.505 - 14417.920: 89.5926% ( 29) 00:31:18.265 14417.920 - 14480.335: 89.8438% ( 27) 00:31:18.265 14480.335 - 14542.750: 90.1135% ( 29) 00:31:18.265 14542.750 - 14605.166: 90.3646% ( 27) 00:31:18.265 14605.166 - 14667.581: 90.6064% ( 26) 00:31:18.265 14667.581 - 14729.996: 90.8761% ( 29) 00:31:18.265 14729.996 - 14792.411: 91.0807% ( 22) 00:31:18.265 14792.411 - 14854.827: 91.2853% ( 22) 00:31:18.265 14854.827 - 14917.242: 91.4528% ( 18) 00:31:18.265 14917.242 - 14979.657: 91.6388% ( 20) 00:31:18.265 14979.657 - 15042.072: 91.8341% ( 21) 00:31:18.265 15042.072 - 15104.488: 92.0015% ( 18) 00:31:18.265 15104.488 - 15166.903: 92.1782% ( 19) 00:31:18.265 15166.903 - 15229.318: 92.2991% ( 13) 00:31:18.265 15229.318 - 15291.733: 92.3921% ( 10) 00:31:18.265 15291.733 - 15354.149: 92.4851% ( 10) 00:31:18.265 15354.149 - 15416.564: 92.5595% ( 8) 00:31:18.265 15416.564 - 15478.979: 92.6525% ( 10) 00:31:18.265 15478.979 - 15541.394: 92.7362% ( 9) 00:31:18.265 15541.394 - 15603.810: 92.8664% ( 14) 00:31:18.265 15603.810 - 15666.225: 92.9781% ( 12) 00:31:18.265 15666.225 - 15728.640: 93.1362% ( 17) 00:31:18.265 15728.640 - 15791.055: 93.2850% ( 16) 00:31:18.265 15791.055 - 15853.470: 93.4524% ( 18) 00:31:18.265 15853.470 - 15915.886: 93.6570% ( 22) 00:31:18.265 15915.886 - 15978.301: 93.8616% ( 22) 00:31:18.265 15978.301 - 16103.131: 94.2522% ( 42) 00:31:18.265 16103.131 - 16227.962: 94.6615% ( 44) 00:31:18.265 16227.962 - 16352.792: 95.0056% ( 37) 00:31:18.265 16352.792 - 16477.623: 
95.3497% ( 37) 00:31:18.265 16477.623 - 16602.453: 95.5729% ( 24) 00:31:18.265 16602.453 - 16727.284: 95.7403% ( 18) 00:31:18.265 16727.284 - 16852.114: 95.8147% ( 8) 00:31:18.265 16852.114 - 16976.945: 95.8891% ( 8) 00:31:18.265 16976.945 - 17101.775: 96.0938% ( 22) 00:31:18.265 17101.775 - 17226.606: 96.2798% ( 20) 00:31:18.265 17226.606 - 17351.436: 96.4751% ( 21) 00:31:18.265 17351.436 - 17476.267: 96.6797% ( 22) 00:31:18.265 17476.267 - 17601.097: 96.8936% ( 23) 00:31:18.265 17601.097 - 17725.928: 97.1168% ( 24) 00:31:18.265 17725.928 - 17850.758: 97.3493% ( 25) 00:31:18.265 17850.758 - 17975.589: 97.5725% ( 24) 00:31:18.265 17975.589 - 18100.419: 97.7307% ( 17) 00:31:18.265 18100.419 - 18225.250: 97.8144% ( 9) 00:31:18.265 18225.250 - 18350.080: 97.8516% ( 4) 00:31:18.265 18350.080 - 18474.910: 97.8888% ( 4) 00:31:18.265 18474.910 - 18599.741: 97.9260% ( 4) 00:31:18.265 18599.741 - 18724.571: 97.9632% ( 4) 00:31:18.265 18724.571 - 18849.402: 98.0004% ( 4) 00:31:18.265 18849.402 - 18974.232: 98.0283% ( 3) 00:31:18.265 18974.232 - 19099.063: 98.0934% ( 7) 00:31:18.265 19099.063 - 19223.893: 98.1399% ( 5) 00:31:18.265 19223.893 - 19348.724: 98.2050% ( 7) 00:31:18.265 19348.724 - 19473.554: 98.2794% ( 8) 00:31:18.265 19473.554 - 19598.385: 98.3259% ( 5) 00:31:18.266 19598.385 - 19723.215: 98.3724% ( 5) 00:31:18.266 19723.215 - 19848.046: 98.4003% ( 3) 00:31:18.266 19848.046 - 19972.876: 98.4282% ( 3) 00:31:18.266 19972.876 - 20097.707: 98.4561% ( 3) 00:31:18.266 20097.707 - 20222.537: 98.4747% ( 2) 00:31:18.266 20222.537 - 20347.368: 98.5026% ( 3) 00:31:18.266 20347.368 - 20472.198: 98.5305% ( 3) 00:31:18.266 20472.198 - 20597.029: 98.5491% ( 2) 00:31:18.266 20597.029 - 20721.859: 98.5770% ( 3) 00:31:18.266 20721.859 - 20846.690: 98.6049% ( 3) 00:31:18.266 20846.690 - 20971.520: 98.6235% ( 2) 00:31:18.266 20971.520 - 21096.350: 98.6514% ( 3) 00:31:18.266 21096.350 - 21221.181: 98.6700% ( 2) 00:31:18.266 21221.181 - 21346.011: 98.6979% ( 3) 00:31:18.266 21346.011 - 21470.842: 98.7165% ( 2) 00:31:18.266 21470.842 - 21595.672: 98.7258% ( 1) 00:31:18.266 21595.672 - 21720.503: 98.7537% ( 3) 00:31:18.266 21720.503 - 21845.333: 98.7816% ( 3) 00:31:18.266 21845.333 - 21970.164: 98.8002% ( 2) 00:31:18.266 21970.164 - 22094.994: 98.8095% ( 1) 00:31:18.266 31706.941 - 31831.771: 98.8281% ( 2) 00:31:18.266 31831.771 - 31956.602: 98.8467% ( 2) 00:31:18.266 31956.602 - 32206.263: 98.9025% ( 6) 00:31:18.266 32206.263 - 32455.924: 98.9676% ( 7) 00:31:18.266 32455.924 - 32705.585: 99.0141% ( 5) 00:31:18.266 32705.585 - 32955.246: 99.0792% ( 7) 00:31:18.266 32955.246 - 33204.907: 99.1350% ( 6) 00:31:18.266 33204.907 - 33454.568: 99.1815% ( 5) 00:31:18.266 33454.568 - 33704.229: 99.2374% ( 6) 00:31:18.266 33704.229 - 33953.890: 99.2839% ( 5) 00:31:18.266 33953.890 - 34203.550: 99.3304% ( 5) 00:31:18.266 34203.550 - 34453.211: 99.3862% ( 6) 00:31:18.266 34453.211 - 34702.872: 99.4048% ( 2) 00:31:18.266 40445.074 - 40694.735: 99.4420% ( 4) 00:31:18.266 40694.735 - 40944.396: 99.4885% ( 5) 00:31:18.266 40944.396 - 41194.057: 99.5443% ( 6) 00:31:18.266 41194.057 - 41443.718: 99.6001% ( 6) 00:31:18.266 41443.718 - 41693.379: 99.6559% ( 6) 00:31:18.266 41693.379 - 41943.040: 99.7117% ( 6) 00:31:18.266 41943.040 - 42192.701: 99.7675% ( 6) 00:31:18.266 42192.701 - 42442.362: 99.8233% ( 6) 00:31:18.266 42442.362 - 42692.023: 99.8791% ( 6) 00:31:18.526 42692.023 - 42941.684: 99.9349% ( 6) 00:31:18.526 42941.684 - 43191.345: 99.9907% ( 6) 00:31:18.526 43191.345 - 43441.006: 100.0000% ( 1) 00:31:18.526 00:31:18.526 
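Each per-device latency histogram lists bucket ranges in microseconds together with a cumulative IO percentage, and the percentile rows in the summary tables above correspond to where that cumulative column crosses each target. A bucket-granularity way to recover a percentile from the raw histogram text, as a hypothetical post-processing helper (not part of the test suite; the tool itself may resolve percentiles more finely than one bucket):

  # Usage: pctl 99.0 < histogram.txt
  pctl() {
      awk -v target="$1" '
          # Histogram rows look like: "8363.642 - 8426.057: 0.0372% ( 4)"
          $2 == "-" && $4 ~ /%$/ {
              hi = $3;  sub(/:$/, "", hi)    # bucket upper bound, in us
              cum = $4; sub(/%$/, "", cum)   # cumulative percentage so far
              if (cum + 0 >= target) { print $1 " - " hi " us"; exit }
          }'
  }

For the 0000:00:10.0 device, a 99.0 target selects the 37199.482 - 37449.143 us bucket, whose lower bound matches the 99.00000% : 37199.482us row in that device's summary table.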
Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:31:18.526 ============================================================================== 00:31:18.526 Range in us Cumulative IO count 00:31:18.526 8488.472 - 8550.888: 0.0744% ( 8) 00:31:18.526 8550.888 - 8613.303: 0.1581% ( 9) 00:31:18.526 8613.303 - 8675.718: 0.2790% ( 13) 00:31:18.526 8675.718 - 8738.133: 0.5301% ( 27) 00:31:18.526 8738.133 - 8800.549: 0.8464% ( 34) 00:31:18.526 8800.549 - 8862.964: 1.2184% ( 40) 00:31:18.526 8862.964 - 8925.379: 1.7206% ( 54) 00:31:18.526 8925.379 - 8987.794: 2.3251% ( 65) 00:31:18.526 8987.794 - 9050.210: 3.0227% ( 75) 00:31:18.526 9050.210 - 9112.625: 3.7202% ( 75) 00:31:18.526 9112.625 - 9175.040: 4.5387% ( 88) 00:31:18.526 9175.040 - 9237.455: 5.3943% ( 92) 00:31:18.526 9237.455 - 9299.870: 6.3430% ( 102) 00:31:18.526 9299.870 - 9362.286: 7.4591% ( 120) 00:31:18.526 9362.286 - 9424.701: 8.5565% ( 118) 00:31:18.526 9424.701 - 9487.116: 9.7656% ( 130) 00:31:18.526 9487.116 - 9549.531: 11.0584% ( 139) 00:31:18.526 9549.531 - 9611.947: 12.3140% ( 135) 00:31:18.526 9611.947 - 9674.362: 13.7091% ( 150) 00:31:18.526 9674.362 - 9736.777: 14.9554% ( 134) 00:31:18.526 9736.777 - 9799.192: 16.2946% ( 144) 00:31:18.526 9799.192 - 9861.608: 17.5967% ( 140) 00:31:18.526 9861.608 - 9924.023: 18.7965% ( 129) 00:31:18.526 9924.023 - 9986.438: 19.9684% ( 126) 00:31:18.526 9986.438 - 10048.853: 20.9542% ( 106) 00:31:18.526 10048.853 - 10111.269: 21.9215% ( 104) 00:31:18.526 10111.269 - 10173.684: 22.7493% ( 89) 00:31:18.526 10173.684 - 10236.099: 23.6049% ( 92) 00:31:18.526 10236.099 - 10298.514: 24.4699% ( 93) 00:31:18.526 10298.514 - 10360.930: 25.2418% ( 83) 00:31:18.526 10360.930 - 10423.345: 26.0789% ( 90) 00:31:18.526 10423.345 - 10485.760: 27.0368% ( 103) 00:31:18.526 10485.760 - 10548.175: 28.0692% ( 111) 00:31:18.526 10548.175 - 10610.590: 29.1295% ( 114) 00:31:18.526 10610.590 - 10673.006: 30.2920% ( 125) 00:31:18.526 10673.006 - 10735.421: 31.6871% ( 150) 00:31:18.526 10735.421 - 10797.836: 33.3705% ( 181) 00:31:18.526 10797.836 - 10860.251: 35.2400% ( 201) 00:31:18.526 10860.251 - 10922.667: 37.2303% ( 214) 00:31:18.526 10922.667 - 10985.082: 39.2392% ( 216) 00:31:18.526 10985.082 - 11047.497: 41.2853% ( 220) 00:31:18.526 11047.497 - 11109.912: 43.3687% ( 224) 00:31:18.526 11109.912 - 11172.328: 45.5171% ( 231) 00:31:18.526 11172.328 - 11234.743: 47.9074% ( 257) 00:31:18.526 11234.743 - 11297.158: 50.2511% ( 252) 00:31:18.526 11297.158 - 11359.573: 52.4647% ( 238) 00:31:18.526 11359.573 - 11421.989: 54.6038% ( 230) 00:31:18.526 11421.989 - 11484.404: 56.6220% ( 217) 00:31:18.526 11484.404 - 11546.819: 58.5286% ( 205) 00:31:18.526 11546.819 - 11609.234: 60.3423% ( 195) 00:31:18.526 11609.234 - 11671.650: 61.9699% ( 175) 00:31:18.526 11671.650 - 11734.065: 63.4022% ( 154) 00:31:18.526 11734.065 - 11796.480: 64.6205% ( 131) 00:31:18.526 11796.480 - 11858.895: 65.7645% ( 123) 00:31:18.526 11858.895 - 11921.310: 66.9271% ( 125) 00:31:18.526 11921.310 - 11983.726: 67.8850% ( 103) 00:31:18.526 11983.726 - 12046.141: 68.8244% ( 101) 00:31:18.526 12046.141 - 12108.556: 69.6708% ( 91) 00:31:18.526 12108.556 - 12170.971: 70.3683% ( 75) 00:31:18.526 12170.971 - 12233.387: 70.9635% ( 64) 00:31:18.526 12233.387 - 12295.802: 71.5867% ( 67) 00:31:18.526 12295.802 - 12358.217: 72.3214% ( 79) 00:31:18.526 12358.217 - 12420.632: 72.9911% ( 72) 00:31:18.526 12420.632 - 12483.048: 73.6886% ( 75) 00:31:18.526 12483.048 - 12545.463: 74.3676% ( 73) 00:31:18.526 12545.463 - 12607.878: 75.0186% ( 70) 00:31:18.526 
12607.878 - 12670.293: 75.6882% ( 72) 00:31:18.526 12670.293 - 12732.709: 76.4230% ( 79) 00:31:18.526 12732.709 - 12795.124: 77.1577% ( 79) 00:31:18.526 12795.124 - 12857.539: 77.8088% ( 70) 00:31:18.526 12857.539 - 12919.954: 78.4040% ( 64) 00:31:18.526 12919.954 - 12982.370: 79.0458% ( 69) 00:31:18.526 12982.370 - 13044.785: 79.6224% ( 62) 00:31:18.526 13044.785 - 13107.200: 80.3013% ( 73) 00:31:18.526 13107.200 - 13169.615: 80.9803% ( 73) 00:31:18.526 13169.615 - 13232.030: 81.6685% ( 74) 00:31:18.526 13232.030 - 13294.446: 82.3289% ( 71) 00:31:18.526 13294.446 - 13356.861: 82.9706% ( 69) 00:31:18.526 13356.861 - 13419.276: 83.5379% ( 61) 00:31:18.526 13419.276 - 13481.691: 84.0681% ( 57) 00:31:18.526 13481.691 - 13544.107: 84.5238% ( 49) 00:31:18.526 13544.107 - 13606.522: 85.0167% ( 53) 00:31:18.526 13606.522 - 13668.937: 85.5655% ( 59) 00:31:18.526 13668.937 - 13731.352: 86.1235% ( 60) 00:31:18.526 13731.352 - 13793.768: 86.6443% ( 56) 00:31:18.526 13793.768 - 13856.183: 87.0629% ( 45) 00:31:18.526 13856.183 - 13918.598: 87.4256% ( 39) 00:31:18.526 13918.598 - 13981.013: 87.7883% ( 39) 00:31:18.526 13981.013 - 14043.429: 88.1696% ( 41) 00:31:18.526 14043.429 - 14105.844: 88.5696% ( 43) 00:31:18.526 14105.844 - 14168.259: 88.9974% ( 46) 00:31:18.526 14168.259 - 14230.674: 89.3694% ( 40) 00:31:18.526 14230.674 - 14293.090: 89.7507% ( 41) 00:31:18.526 14293.090 - 14355.505: 90.1042% ( 38) 00:31:18.526 14355.505 - 14417.920: 90.4576% ( 38) 00:31:18.526 14417.920 - 14480.335: 90.8017% ( 37) 00:31:18.526 14480.335 - 14542.750: 91.0342% ( 25) 00:31:18.526 14542.750 - 14605.166: 91.3132% ( 30) 00:31:18.526 14605.166 - 14667.581: 91.5365% ( 24) 00:31:18.526 14667.581 - 14729.996: 91.7411% ( 22) 00:31:18.526 14729.996 - 14792.411: 91.9271% ( 20) 00:31:18.526 14792.411 - 14854.827: 92.1038% ( 19) 00:31:18.526 14854.827 - 14917.242: 92.2340% ( 14) 00:31:18.526 14917.242 - 14979.657: 92.3642% ( 14) 00:31:18.526 14979.657 - 15042.072: 92.4851% ( 13) 00:31:18.526 15042.072 - 15104.488: 92.5688% ( 9) 00:31:18.526 15104.488 - 15166.903: 92.6525% ( 9) 00:31:18.526 15166.903 - 15229.318: 92.7176% ( 7) 00:31:18.526 15229.318 - 15291.733: 92.7641% ( 5) 00:31:18.527 15291.733 - 15354.149: 92.8199% ( 6) 00:31:18.527 15354.149 - 15416.564: 92.8664% ( 5) 00:31:18.527 15416.564 - 15478.979: 92.9501% ( 9) 00:31:18.527 15478.979 - 15541.394: 93.0339% ( 9) 00:31:18.527 15541.394 - 15603.810: 93.1083% ( 8) 00:31:18.527 15603.810 - 15666.225: 93.2013% ( 10) 00:31:18.527 15666.225 - 15728.640: 93.2943% ( 10) 00:31:18.527 15728.640 - 15791.055: 93.3687% ( 8) 00:31:18.527 15791.055 - 15853.470: 93.4710% ( 11) 00:31:18.527 15853.470 - 15915.886: 93.5640% ( 10) 00:31:18.527 15915.886 - 15978.301: 93.7221% ( 17) 00:31:18.527 15978.301 - 16103.131: 94.0383% ( 34) 00:31:18.527 16103.131 - 16227.962: 94.4010% ( 39) 00:31:18.527 16227.962 - 16352.792: 94.7824% ( 41) 00:31:18.527 16352.792 - 16477.623: 95.1730% ( 42) 00:31:18.527 16477.623 - 16602.453: 95.5264% ( 38) 00:31:18.527 16602.453 - 16727.284: 95.8426% ( 34) 00:31:18.527 16727.284 - 16852.114: 96.1682% ( 35) 00:31:18.527 16852.114 - 16976.945: 96.4751% ( 33) 00:31:18.527 16976.945 - 17101.775: 96.6890% ( 23) 00:31:18.527 17101.775 - 17226.606: 96.8192% ( 14) 00:31:18.527 17226.606 - 17351.436: 96.9401% ( 13) 00:31:18.527 17351.436 - 17476.267: 97.0610% ( 13) 00:31:18.527 17476.267 - 17601.097: 97.1912% ( 14) 00:31:18.527 17601.097 - 17725.928: 97.3121% ( 13) 00:31:18.527 17725.928 - 17850.758: 97.4423% ( 14) 00:31:18.527 17850.758 - 17975.589: 97.5632% ( 13) 
00:31:18.527 17975.589 - 18100.419: 97.6376% ( 8) 00:31:18.527 18100.419 - 18225.250: 97.6749% ( 4) 00:31:18.527 18225.250 - 18350.080: 97.7121% ( 4) 00:31:18.527 18350.080 - 18474.910: 97.7400% ( 3) 00:31:18.527 18474.910 - 18599.741: 97.7772% ( 4) 00:31:18.527 18599.741 - 18724.571: 97.8051% ( 3) 00:31:18.527 18724.571 - 18849.402: 97.8423% ( 4) 00:31:18.527 18849.402 - 18974.232: 97.8795% ( 4) 00:31:18.527 18974.232 - 19099.063: 97.9167% ( 4) 00:31:18.527 19099.063 - 19223.893: 97.9446% ( 3) 00:31:18.527 19223.893 - 19348.724: 98.0004% ( 6) 00:31:18.527 19348.724 - 19473.554: 98.0655% ( 7) 00:31:18.527 19473.554 - 19598.385: 98.1213% ( 6) 00:31:18.527 19598.385 - 19723.215: 98.1957% ( 8) 00:31:18.527 19723.215 - 19848.046: 98.2422% ( 5) 00:31:18.527 19848.046 - 19972.876: 98.2980% ( 6) 00:31:18.527 19972.876 - 20097.707: 98.3538% ( 6) 00:31:18.527 20097.707 - 20222.537: 98.4189% ( 7) 00:31:18.527 20222.537 - 20347.368: 98.4375% ( 2) 00:31:18.527 20347.368 - 20472.198: 98.4654% ( 3) 00:31:18.527 20472.198 - 20597.029: 98.4840% ( 2) 00:31:18.527 20597.029 - 20721.859: 98.5119% ( 3) 00:31:18.527 20721.859 - 20846.690: 98.5305% ( 2) 00:31:18.527 20846.690 - 20971.520: 98.5584% ( 3) 00:31:18.527 20971.520 - 21096.350: 98.5863% ( 3) 00:31:18.527 21096.350 - 21221.181: 98.6142% ( 3) 00:31:18.527 21221.181 - 21346.011: 98.6328% ( 2) 00:31:18.527 21346.011 - 21470.842: 98.6514% ( 2) 00:31:18.527 21470.842 - 21595.672: 98.6793% ( 3) 00:31:18.527 21595.672 - 21720.503: 98.6979% ( 2) 00:31:18.527 21720.503 - 21845.333: 98.7258% ( 3) 00:31:18.527 21845.333 - 21970.164: 98.7444% ( 2) 00:31:18.527 21970.164 - 22094.994: 98.7723% ( 3) 00:31:18.527 22094.994 - 22219.825: 98.8002% ( 3) 00:31:18.527 22219.825 - 22344.655: 98.8095% ( 1) 00:31:18.527 28461.349 - 28586.179: 98.8188% ( 1) 00:31:18.527 28586.179 - 28711.010: 98.8374% ( 2) 00:31:18.527 28711.010 - 28835.840: 98.8746% ( 4) 00:31:18.527 28835.840 - 28960.670: 98.9025% ( 3) 00:31:18.527 28960.670 - 29085.501: 98.9211% ( 2) 00:31:18.527 29085.501 - 29210.331: 98.9397% ( 2) 00:31:18.527 29210.331 - 29335.162: 98.9676% ( 3) 00:31:18.527 29335.162 - 29459.992: 98.9955% ( 3) 00:31:18.527 29459.992 - 29584.823: 99.0234% ( 3) 00:31:18.527 29584.823 - 29709.653: 99.0513% ( 3) 00:31:18.527 29709.653 - 29834.484: 99.0792% ( 3) 00:31:18.527 29834.484 - 29959.314: 99.1071% ( 3) 00:31:18.527 29959.314 - 30084.145: 99.1257% ( 2) 00:31:18.527 30084.145 - 30208.975: 99.1629% ( 4) 00:31:18.527 30208.975 - 30333.806: 99.1815% ( 2) 00:31:18.527 30333.806 - 30458.636: 99.2094% ( 3) 00:31:18.527 30458.636 - 30583.467: 99.2281% ( 2) 00:31:18.527 30583.467 - 30708.297: 99.2560% ( 3) 00:31:18.527 30708.297 - 30833.128: 99.2839% ( 3) 00:31:18.527 30833.128 - 30957.958: 99.3118% ( 3) 00:31:18.527 30957.958 - 31082.789: 99.3397% ( 3) 00:31:18.527 31082.789 - 31207.619: 99.3583% ( 2) 00:31:18.527 31207.619 - 31332.450: 99.3862% ( 3) 00:31:18.527 31332.450 - 31457.280: 99.4048% ( 2) 00:31:18.527 37199.482 - 37449.143: 99.4234% ( 2) 00:31:18.527 37449.143 - 37698.804: 99.4792% ( 6) 00:31:18.527 37698.804 - 37948.465: 99.5350% ( 6) 00:31:18.527 37948.465 - 38198.126: 99.5908% ( 6) 00:31:18.527 38198.126 - 38447.787: 99.6466% ( 6) 00:31:18.527 38447.787 - 38697.448: 99.6931% ( 5) 00:31:18.527 38697.448 - 38947.109: 99.7489% ( 6) 00:31:18.527 38947.109 - 39196.770: 99.8047% ( 6) 00:31:18.527 39196.770 - 39446.430: 99.8605% ( 6) 00:31:18.527 39446.430 - 39696.091: 99.9163% ( 6) 00:31:18.527 39696.091 - 39945.752: 99.9721% ( 6) 00:31:18.527 39945.752 - 40195.413: 100.0000% ( 3) 
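Because the per-line percentages are cumulative, a latency percentile can be read straight off a histogram block: to bucket resolution, it is the upper bound of the first bucket whose cumulative share reaches the target, and the "Summary latency data" sections that spdk_nvme_perf prints alongside these histograms report 1.00000% through 99.99999% values of exactly this kind. The lookup below is a rough sketch assuming buckets parsed as in the helper above; it returns the bucket's upper bound as an approximation, and whatever interpolation spdk_nvme_perf itself applies may differ.

def latency_percentile(buckets, target_pct):
    # buckets: [(low_us, high_us, cumulative_pct, count), ...] in print order
    for _low_us, high_us, cum_pct, _count in buckets:
        if cum_pct >= target_pct:
            return high_us  # first bucket reaching the target cumulative share
    raise ValueError(f"histogram only covers {buckets[-1][2]:.4f}%")

# e.g. latency_percentile(buckets, 50.0) gives the median bucket bound,
# and latency_percentile(buckets, 99.0) the approximate p99 bound.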
00:31:18.527 00:31:18.527 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:31:18.527 ============================================================================== 00:31:18.527 Range in us Cumulative IO count 00:31:18.527 8426.057 - 8488.472: 0.0093% ( 1) 00:31:18.527 8488.472 - 8550.888: 0.0558% ( 5) 00:31:18.527 8550.888 - 8613.303: 0.1581% ( 11) 00:31:18.527 8613.303 - 8675.718: 0.3441% ( 20) 00:31:18.527 8675.718 - 8738.133: 0.5859% ( 26) 00:31:18.527 8738.133 - 8800.549: 0.9208% ( 36) 00:31:18.527 8800.549 - 8862.964: 1.3393% ( 45) 00:31:18.527 8862.964 - 8925.379: 1.8322% ( 53) 00:31:18.527 8925.379 - 8987.794: 2.3810% ( 59) 00:31:18.527 8987.794 - 9050.210: 2.9390% ( 60) 00:31:18.527 9050.210 - 9112.625: 3.6551% ( 77) 00:31:18.527 9112.625 - 9175.040: 4.4364% ( 84) 00:31:18.527 9175.040 - 9237.455: 5.2362% ( 86) 00:31:18.527 9237.455 - 9299.870: 6.2965% ( 114) 00:31:18.527 9299.870 - 9362.286: 7.3196% ( 110) 00:31:18.527 9362.286 - 9424.701: 8.5844% ( 136) 00:31:18.527 9424.701 - 9487.116: 9.8493% ( 136) 00:31:18.527 9487.116 - 9549.531: 11.2723% ( 153) 00:31:18.527 9549.531 - 9611.947: 12.6023% ( 143) 00:31:18.527 9611.947 - 9674.362: 13.9230% ( 142) 00:31:18.527 9674.362 - 9736.777: 15.2065% ( 138) 00:31:18.527 9736.777 - 9799.192: 16.4900% ( 138) 00:31:18.527 9799.192 - 9861.608: 17.7362% ( 134) 00:31:18.527 9861.608 - 9924.023: 18.8895% ( 124) 00:31:18.527 9924.023 - 9986.438: 20.0428% ( 124) 00:31:18.527 9986.438 - 10048.853: 21.0379% ( 107) 00:31:18.527 10048.853 - 10111.269: 22.0238% ( 106) 00:31:18.527 10111.269 - 10173.684: 23.0097% ( 106) 00:31:18.527 10173.684 - 10236.099: 24.0327% ( 110) 00:31:18.527 10236.099 - 10298.514: 24.8605% ( 89) 00:31:18.527 10298.514 - 10360.930: 25.6696% ( 87) 00:31:18.527 10360.930 - 10423.345: 26.4602% ( 85) 00:31:18.527 10423.345 - 10485.760: 27.2507% ( 85) 00:31:18.527 10485.760 - 10548.175: 28.1157% ( 93) 00:31:18.527 10548.175 - 10610.590: 29.1574% ( 112) 00:31:18.527 10610.590 - 10673.006: 30.2920% ( 122) 00:31:18.527 10673.006 - 10735.421: 31.7987% ( 162) 00:31:18.527 10735.421 - 10797.836: 33.4635% ( 179) 00:31:18.527 10797.836 - 10860.251: 35.3144% ( 199) 00:31:18.527 10860.251 - 10922.667: 37.2675% ( 210) 00:31:18.527 10922.667 - 10985.082: 39.3322% ( 222) 00:31:18.527 10985.082 - 11047.497: 41.4435% ( 227) 00:31:18.527 11047.497 - 11109.912: 43.5361% ( 225) 00:31:18.527 11109.912 - 11172.328: 45.7589% ( 239) 00:31:18.527 11172.328 - 11234.743: 48.1213% ( 254) 00:31:18.527 11234.743 - 11297.158: 50.4371% ( 249) 00:31:18.527 11297.158 - 11359.573: 52.6879% ( 242) 00:31:18.527 11359.573 - 11421.989: 54.9386% ( 242) 00:31:18.527 11421.989 - 11484.404: 56.9847% ( 220) 00:31:18.527 11484.404 - 11546.819: 58.8635% ( 202) 00:31:18.527 11546.819 - 11609.234: 60.6957% ( 197) 00:31:18.527 11609.234 - 11671.650: 62.3977% ( 183) 00:31:18.527 11671.650 - 11734.065: 63.8765% ( 159) 00:31:18.527 11734.065 - 11796.480: 65.1600% ( 138) 00:31:18.527 11796.480 - 11858.895: 66.2853% ( 121) 00:31:18.527 11858.895 - 11921.310: 67.2991% ( 109) 00:31:18.527 11921.310 - 11983.726: 68.3129% ( 109) 00:31:18.527 11983.726 - 12046.141: 69.2243% ( 98) 00:31:18.527 12046.141 - 12108.556: 70.0149% ( 85) 00:31:18.527 12108.556 - 12170.971: 70.6938% ( 73) 00:31:18.527 12170.971 - 12233.387: 71.3728% ( 73) 00:31:18.527 12233.387 - 12295.802: 71.9587% ( 63) 00:31:18.527 12295.802 - 12358.217: 72.5818% ( 67) 00:31:18.527 12358.217 - 12420.632: 73.2050% ( 67) 00:31:18.527 12420.632 - 12483.048: 73.8188% ( 66) 00:31:18.527 12483.048 - 12545.463: 74.4699% 
( 70) 00:31:18.527 12545.463 - 12607.878: 75.0837% ( 66) 00:31:18.527 12607.878 - 12670.293: 75.6417% ( 60) 00:31:18.527 12670.293 - 12732.709: 76.2091% ( 61) 00:31:18.527 12732.709 - 12795.124: 76.8229% ( 66) 00:31:18.527 12795.124 - 12857.539: 77.4647% ( 69) 00:31:18.527 12857.539 - 12919.954: 78.0599% ( 64) 00:31:18.527 12919.954 - 12982.370: 78.7202% ( 71) 00:31:18.527 12982.370 - 13044.785: 79.3527% ( 68) 00:31:18.527 13044.785 - 13107.200: 80.0688% ( 77) 00:31:18.527 13107.200 - 13169.615: 80.6920% ( 67) 00:31:18.527 13169.615 - 13232.030: 81.4081% ( 77) 00:31:18.527 13232.030 - 13294.446: 82.1336% ( 78) 00:31:18.527 13294.446 - 13356.861: 82.8497% ( 77) 00:31:18.527 13356.861 - 13419.276: 83.5100% ( 71) 00:31:18.527 13419.276 - 13481.691: 84.1518% ( 69) 00:31:18.527 13481.691 - 13544.107: 84.7284% ( 62) 00:31:18.527 13544.107 - 13606.522: 85.3051% ( 62) 00:31:18.527 13606.522 - 13668.937: 85.8352% ( 57) 00:31:18.527 13668.937 - 13731.352: 86.3839% ( 59) 00:31:18.527 13731.352 - 13793.768: 86.8862% ( 54) 00:31:18.528 13793.768 - 13856.183: 87.2768% ( 42) 00:31:18.528 13856.183 - 13918.598: 87.6116% ( 36) 00:31:18.528 13918.598 - 13981.013: 87.8813% ( 29) 00:31:18.528 13981.013 - 14043.429: 88.1510% ( 29) 00:31:18.528 14043.429 - 14105.844: 88.4022% ( 27) 00:31:18.528 14105.844 - 14168.259: 88.6998% ( 32) 00:31:18.528 14168.259 - 14230.674: 88.9602% ( 28) 00:31:18.528 14230.674 - 14293.090: 89.1648% ( 22) 00:31:18.528 14293.090 - 14355.505: 89.3880% ( 24) 00:31:18.528 14355.505 - 14417.920: 89.5554% ( 18) 00:31:18.528 14417.920 - 14480.335: 89.7228% ( 18) 00:31:18.528 14480.335 - 14542.750: 89.8624% ( 15) 00:31:18.528 14542.750 - 14605.166: 90.0298% ( 18) 00:31:18.528 14605.166 - 14667.581: 90.2065% ( 19) 00:31:18.528 14667.581 - 14729.996: 90.3925% ( 20) 00:31:18.528 14729.996 - 14792.411: 90.5692% ( 19) 00:31:18.528 14792.411 - 14854.827: 90.7180% ( 16) 00:31:18.528 14854.827 - 14917.242: 90.8761% ( 17) 00:31:18.528 14917.242 - 14979.657: 91.0435% ( 18) 00:31:18.528 14979.657 - 15042.072: 91.2481% ( 22) 00:31:18.528 15042.072 - 15104.488: 91.4528% ( 22) 00:31:18.528 15104.488 - 15166.903: 91.6388% ( 20) 00:31:18.528 15166.903 - 15229.318: 91.8248% ( 20) 00:31:18.528 15229.318 - 15291.733: 91.9736% ( 16) 00:31:18.528 15291.733 - 15354.149: 92.1131% ( 15) 00:31:18.528 15354.149 - 15416.564: 92.2433% ( 14) 00:31:18.528 15416.564 - 15478.979: 92.4014% ( 17) 00:31:18.528 15478.979 - 15541.394: 92.5409% ( 15) 00:31:18.528 15541.394 - 15603.810: 92.6525% ( 12) 00:31:18.528 15603.810 - 15666.225: 92.7548% ( 11) 00:31:18.528 15666.225 - 15728.640: 92.9222% ( 18) 00:31:18.528 15728.640 - 15791.055: 93.0897% ( 18) 00:31:18.528 15791.055 - 15853.470: 93.2664% ( 19) 00:31:18.528 15853.470 - 15915.886: 93.4803% ( 23) 00:31:18.528 15915.886 - 15978.301: 93.6756% ( 21) 00:31:18.528 15978.301 - 16103.131: 94.0941% ( 45) 00:31:18.528 16103.131 - 16227.962: 94.5592% ( 50) 00:31:18.528 16227.962 - 16352.792: 94.9870% ( 46) 00:31:18.528 16352.792 - 16477.623: 95.3869% ( 43) 00:31:18.528 16477.623 - 16602.453: 95.7496% ( 39) 00:31:18.528 16602.453 - 16727.284: 96.1031% ( 38) 00:31:18.528 16727.284 - 16852.114: 96.4193% ( 34) 00:31:18.528 16852.114 - 16976.945: 96.6983% ( 30) 00:31:18.528 16976.945 - 17101.775: 96.8936% ( 21) 00:31:18.528 17101.775 - 17226.606: 97.0145% ( 13) 00:31:18.528 17226.606 - 17351.436: 97.1261% ( 12) 00:31:18.528 17351.436 - 17476.267: 97.2284% ( 11) 00:31:18.528 17476.267 - 17601.097: 97.3400% ( 12) 00:31:18.528 17601.097 - 17725.928: 97.4330% ( 10) 00:31:18.528 17725.928 - 
17850.758: 97.5260% ( 10) 00:31:18.528 17850.758 - 17975.589: 97.5911% ( 7) 00:31:18.528 17975.589 - 18100.419: 97.6190% ( 3) 00:31:18.528 18350.080 - 18474.910: 97.6376% ( 2) 00:31:18.528 18474.910 - 18599.741: 97.6749% ( 4) 00:31:18.528 18599.741 - 18724.571: 97.7121% ( 4) 00:31:18.528 18724.571 - 18849.402: 97.7400% ( 3) 00:31:18.528 18849.402 - 18974.232: 97.7679% ( 3) 00:31:18.528 18974.232 - 19099.063: 97.8051% ( 4) 00:31:18.528 19099.063 - 19223.893: 97.8423% ( 4) 00:31:18.528 19223.893 - 19348.724: 97.8702% ( 3) 00:31:18.528 19348.724 - 19473.554: 97.9167% ( 5) 00:31:18.528 19473.554 - 19598.385: 97.9632% ( 5) 00:31:18.528 19598.385 - 19723.215: 98.0283% ( 7) 00:31:18.528 19723.215 - 19848.046: 98.0841% ( 6) 00:31:18.528 19848.046 - 19972.876: 98.1306% ( 5) 00:31:18.528 19972.876 - 20097.707: 98.1957% ( 7) 00:31:18.528 20097.707 - 20222.537: 98.2422% ( 5) 00:31:18.528 20222.537 - 20347.368: 98.3073% ( 7) 00:31:18.528 20347.368 - 20472.198: 98.3724% ( 7) 00:31:18.528 20472.198 - 20597.029: 98.4189% ( 5) 00:31:18.528 20597.029 - 20721.859: 98.4654% ( 5) 00:31:18.528 20721.859 - 20846.690: 98.4840% ( 2) 00:31:18.528 20846.690 - 20971.520: 98.5119% ( 3) 00:31:18.528 20971.520 - 21096.350: 98.5398% ( 3) 00:31:18.528 21096.350 - 21221.181: 98.5584% ( 2) 00:31:18.528 21221.181 - 21346.011: 98.5956% ( 4) 00:31:18.528 21346.011 - 21470.842: 98.6142% ( 2) 00:31:18.528 21470.842 - 21595.672: 98.6421% ( 3) 00:31:18.528 21595.672 - 21720.503: 98.6700% ( 3) 00:31:18.528 21720.503 - 21845.333: 98.6979% ( 3) 00:31:18.528 21845.333 - 21970.164: 98.7165% ( 2) 00:31:18.528 21970.164 - 22094.994: 98.7351% ( 2) 00:31:18.528 22094.994 - 22219.825: 98.7630% ( 3) 00:31:18.528 22219.825 - 22344.655: 98.7816% ( 2) 00:31:18.528 22344.655 - 22469.486: 98.8095% ( 3) 00:31:18.528 25090.926 - 25215.756: 98.8188% ( 1) 00:31:18.528 25215.756 - 25340.587: 98.8374% ( 2) 00:31:18.528 25340.587 - 25465.417: 98.8653% ( 3) 00:31:18.528 25465.417 - 25590.248: 98.8932% ( 3) 00:31:18.528 25590.248 - 25715.078: 98.9211% ( 3) 00:31:18.528 25715.078 - 25839.909: 98.9397% ( 2) 00:31:18.528 25839.909 - 25964.739: 98.9676% ( 3) 00:31:18.528 25964.739 - 26089.570: 98.9955% ( 3) 00:31:18.528 26214.400 - 26339.230: 99.0234% ( 3) 00:31:18.528 26339.230 - 26464.061: 99.0513% ( 3) 00:31:18.528 26464.061 - 26588.891: 99.0699% ( 2) 00:31:18.528 26588.891 - 26713.722: 99.0978% ( 3) 00:31:18.528 26713.722 - 26838.552: 99.1257% ( 3) 00:31:18.528 26838.552 - 26963.383: 99.1536% ( 3) 00:31:18.528 26963.383 - 27088.213: 99.1722% ( 2) 00:31:18.528 27088.213 - 27213.044: 99.2001% ( 3) 00:31:18.528 27213.044 - 27337.874: 99.2281% ( 3) 00:31:18.528 27337.874 - 27462.705: 99.2560% ( 3) 00:31:18.528 27462.705 - 27587.535: 99.2839% ( 3) 00:31:18.528 27587.535 - 27712.366: 99.3025% ( 2) 00:31:18.528 27712.366 - 27837.196: 99.3304% ( 3) 00:31:18.528 27837.196 - 27962.027: 99.3490% ( 2) 00:31:18.528 27962.027 - 28086.857: 99.3769% ( 3) 00:31:18.528 28086.857 - 28211.688: 99.4048% ( 3) 00:31:18.528 33953.890 - 34203.550: 99.4141% ( 1) 00:31:18.528 34203.550 - 34453.211: 99.4606% ( 5) 00:31:18.528 34453.211 - 34702.872: 99.5257% ( 7) 00:31:18.528 34702.872 - 34952.533: 99.5722% ( 5) 00:31:18.528 34952.533 - 35202.194: 99.6280% ( 6) 00:31:18.528 35202.194 - 35451.855: 99.6838% ( 6) 00:31:18.528 35451.855 - 35701.516: 99.7396% ( 6) 00:31:18.528 35701.516 - 35951.177: 99.7954% ( 6) 00:31:18.528 35951.177 - 36200.838: 99.8512% ( 6) 00:31:18.528 36200.838 - 36450.499: 99.9070% ( 6) 00:31:18.528 36450.499 - 36700.160: 99.9628% ( 6) 00:31:18.528 36700.160 - 
36949.821: 100.0000% ( 4) 00:31:18.528 00:31:18.528 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:31:18.528 ============================================================================== 00:31:18.528 Range in us Cumulative IO count 00:31:18.528 8426.057 - 8488.472: 0.0093% ( 1) 00:31:18.528 8488.472 - 8550.888: 0.0744% ( 7) 00:31:18.528 8550.888 - 8613.303: 0.1674% ( 10) 00:31:18.528 8613.303 - 8675.718: 0.3255% ( 17) 00:31:18.528 8675.718 - 8738.133: 0.5766% ( 27) 00:31:18.528 8738.133 - 8800.549: 0.8650% ( 31) 00:31:18.528 8800.549 - 8862.964: 1.2184% ( 38) 00:31:18.528 8862.964 - 8925.379: 1.7206% ( 54) 00:31:18.528 8925.379 - 8987.794: 2.1577% ( 47) 00:31:18.528 8987.794 - 9050.210: 2.9018% ( 80) 00:31:18.528 9050.210 - 9112.625: 3.6737% ( 83) 00:31:18.528 9112.625 - 9175.040: 4.5015% ( 89) 00:31:18.528 9175.040 - 9237.455: 5.4129% ( 98) 00:31:18.528 9237.455 - 9299.870: 6.4639% ( 113) 00:31:18.528 9299.870 - 9362.286: 7.6451% ( 127) 00:31:18.528 9362.286 - 9424.701: 8.8914% ( 134) 00:31:18.528 9424.701 - 9487.116: 10.1283% ( 133) 00:31:18.528 9487.116 - 9549.531: 11.4676% ( 144) 00:31:18.528 9549.531 - 9611.947: 12.8627% ( 150) 00:31:18.528 9611.947 - 9674.362: 14.1648% ( 140) 00:31:18.528 9674.362 - 9736.777: 15.4576% ( 139) 00:31:18.528 9736.777 - 9799.192: 16.6760% ( 131) 00:31:18.528 9799.192 - 9861.608: 17.9315% ( 135) 00:31:18.528 9861.608 - 9924.023: 19.1499% ( 131) 00:31:18.528 9924.023 - 9986.438: 20.2846% ( 122) 00:31:18.528 9986.438 - 10048.853: 21.4286% ( 123) 00:31:18.528 10048.853 - 10111.269: 22.5167% ( 117) 00:31:18.528 10111.269 - 10173.684: 23.4840% ( 104) 00:31:18.528 10173.684 - 10236.099: 24.2560% ( 83) 00:31:18.528 10236.099 - 10298.514: 25.0651% ( 87) 00:31:18.528 10298.514 - 10360.930: 25.8092% ( 80) 00:31:18.528 10360.930 - 10423.345: 26.5718% ( 82) 00:31:18.528 10423.345 - 10485.760: 27.3438% ( 83) 00:31:18.528 10485.760 - 10548.175: 28.1529% ( 87) 00:31:18.529 10548.175 - 10610.590: 29.2504% ( 118) 00:31:18.529 10610.590 - 10673.006: 30.5711% ( 142) 00:31:18.529 10673.006 - 10735.421: 31.9754% ( 151) 00:31:18.529 10735.421 - 10797.836: 33.6496% ( 180) 00:31:18.529 10797.836 - 10860.251: 35.3981% ( 188) 00:31:18.529 10860.251 - 10922.667: 37.3419% ( 209) 00:31:18.529 10922.667 - 10985.082: 39.4717% ( 229) 00:31:18.529 10985.082 - 11047.497: 41.7039% ( 240) 00:31:18.529 11047.497 - 11109.912: 44.1592% ( 264) 00:31:18.529 11109.912 - 11172.328: 46.6239% ( 265) 00:31:18.529 11172.328 - 11234.743: 49.0420% ( 260) 00:31:18.529 11234.743 - 11297.158: 51.3579% ( 249) 00:31:18.529 11297.158 - 11359.573: 53.6644% ( 248) 00:31:18.529 11359.573 - 11421.989: 55.9338% ( 244) 00:31:18.529 11421.989 - 11484.404: 58.1845% ( 242) 00:31:18.529 11484.404 - 11546.819: 60.2121% ( 218) 00:31:18.529 11546.819 - 11609.234: 61.8490% ( 176) 00:31:18.529 11609.234 - 11671.650: 63.5138% ( 179) 00:31:18.529 11671.650 - 11734.065: 65.0670% ( 167) 00:31:18.529 11734.065 - 11796.480: 66.4621% ( 150) 00:31:18.529 11796.480 - 11858.895: 67.5595% ( 118) 00:31:18.529 11858.895 - 11921.310: 68.5361% ( 105) 00:31:18.529 11921.310 - 11983.726: 69.4289% ( 96) 00:31:18.529 11983.726 - 12046.141: 70.2009% ( 83) 00:31:18.529 12046.141 - 12108.556: 70.8054% ( 65) 00:31:18.529 12108.556 - 12170.971: 71.3821% ( 62) 00:31:18.529 12170.971 - 12233.387: 71.9773% ( 64) 00:31:18.529 12233.387 - 12295.802: 72.5074% ( 57) 00:31:18.529 12295.802 - 12358.217: 73.0376% ( 57) 00:31:18.529 12358.217 - 12420.632: 73.6421% ( 65) 00:31:18.529 12420.632 - 12483.048: 74.2374% ( 64) 00:31:18.529 
12483.048 - 12545.463: 74.9070% ( 72) 00:31:18.529 12545.463 - 12607.878: 75.5673% ( 71) 00:31:18.529 12607.878 - 12670.293: 76.2277% ( 71) 00:31:18.529 12670.293 - 12732.709: 76.8229% ( 64) 00:31:18.529 12732.709 - 12795.124: 77.3810% ( 60) 00:31:18.529 12795.124 - 12857.539: 77.9669% ( 63) 00:31:18.529 12857.539 - 12919.954: 78.5435% ( 62) 00:31:18.529 12919.954 - 12982.370: 79.1202% ( 62) 00:31:18.529 12982.370 - 13044.785: 79.6875% ( 61) 00:31:18.529 13044.785 - 13107.200: 80.2548% ( 61) 00:31:18.529 13107.200 - 13169.615: 80.8222% ( 61) 00:31:18.529 13169.615 - 13232.030: 81.3988% ( 62) 00:31:18.529 13232.030 - 13294.446: 81.9847% ( 63) 00:31:18.529 13294.446 - 13356.861: 82.5428% ( 60) 00:31:18.529 13356.861 - 13419.276: 83.1566% ( 66) 00:31:18.529 13419.276 - 13481.691: 83.7333% ( 62) 00:31:18.529 13481.691 - 13544.107: 84.1983% ( 50) 00:31:18.529 13544.107 - 13606.522: 84.6912% ( 53) 00:31:18.529 13606.522 - 13668.937: 85.1283% ( 47) 00:31:18.529 13668.937 - 13731.352: 85.5841% ( 49) 00:31:18.529 13731.352 - 13793.768: 86.0119% ( 46) 00:31:18.529 13793.768 - 13856.183: 86.3653% ( 38) 00:31:18.529 13856.183 - 13918.598: 86.7281% ( 39) 00:31:18.529 13918.598 - 13981.013: 87.1094% ( 41) 00:31:18.529 13981.013 - 14043.429: 87.4628% ( 38) 00:31:18.529 14043.429 - 14105.844: 87.7883% ( 35) 00:31:18.529 14105.844 - 14168.259: 88.1696% ( 41) 00:31:18.529 14168.259 - 14230.674: 88.4859% ( 34) 00:31:18.529 14230.674 - 14293.090: 88.7742% ( 31) 00:31:18.529 14293.090 - 14355.505: 89.0532% ( 30) 00:31:18.529 14355.505 - 14417.920: 89.2578% ( 22) 00:31:18.529 14417.920 - 14480.335: 89.4903% ( 25) 00:31:18.529 14480.335 - 14542.750: 89.6949% ( 22) 00:31:18.529 14542.750 - 14605.166: 89.8996% ( 22) 00:31:18.529 14605.166 - 14667.581: 90.0763% ( 19) 00:31:18.529 14667.581 - 14729.996: 90.2623% ( 20) 00:31:18.529 14729.996 - 14792.411: 90.4204% ( 17) 00:31:18.529 14792.411 - 14854.827: 90.5785% ( 17) 00:31:18.529 14854.827 - 14917.242: 90.6994% ( 13) 00:31:18.529 14917.242 - 14979.657: 90.8110% ( 12) 00:31:18.529 14979.657 - 15042.072: 90.9412% ( 14) 00:31:18.529 15042.072 - 15104.488: 91.0528% ( 12) 00:31:18.529 15104.488 - 15166.903: 91.1551% ( 11) 00:31:18.529 15166.903 - 15229.318: 91.3039% ( 16) 00:31:18.529 15229.318 - 15291.733: 91.4342% ( 14) 00:31:18.529 15291.733 - 15354.149: 91.5551% ( 13) 00:31:18.529 15354.149 - 15416.564: 91.6946% ( 15) 00:31:18.529 15416.564 - 15478.979: 91.8155% ( 13) 00:31:18.529 15478.979 - 15541.394: 91.9178% ( 11) 00:31:18.529 15541.394 - 15603.810: 92.0294% ( 12) 00:31:18.529 15603.810 - 15666.225: 92.1596% ( 14) 00:31:18.529 15666.225 - 15728.640: 92.2526% ( 10) 00:31:18.529 15728.640 - 15791.055: 92.3642% ( 12) 00:31:18.529 15791.055 - 15853.470: 92.4851% ( 13) 00:31:18.529 15853.470 - 15915.886: 92.6153% ( 14) 00:31:18.529 15915.886 - 15978.301: 92.8106% ( 21) 00:31:18.529 15978.301 - 16103.131: 93.2199% ( 44) 00:31:18.529 16103.131 - 16227.962: 93.6012% ( 41) 00:31:18.529 16227.962 - 16352.792: 93.9453% ( 37) 00:31:18.529 16352.792 - 16477.623: 94.3266% ( 41) 00:31:18.529 16477.623 - 16602.453: 94.7545% ( 46) 00:31:18.529 16602.453 - 16727.284: 95.2102% ( 49) 00:31:18.529 16727.284 - 16852.114: 95.6473% ( 47) 00:31:18.529 16852.114 - 16976.945: 96.1403% ( 53) 00:31:18.529 16976.945 - 17101.775: 96.5216% ( 41) 00:31:18.529 17101.775 - 17226.606: 96.8099% ( 31) 00:31:18.529 17226.606 - 17351.436: 97.0889% ( 30) 00:31:18.529 17351.436 - 17476.267: 97.2098% ( 13) 00:31:18.529 17476.267 - 17601.097: 97.3121% ( 11) 00:31:18.529 17601.097 - 17725.928: 97.4144% ( 
11) 00:31:18.529 17725.928 - 17850.758: 97.5260% ( 12) 00:31:18.529 17850.758 - 17975.589: 97.6097% ( 9) 00:31:18.529 17975.589 - 18100.419: 97.6190% ( 1) 00:31:18.529 18974.232 - 19099.063: 97.6469% ( 3) 00:31:18.529 19099.063 - 19223.893: 97.6749% ( 3) 00:31:18.529 19223.893 - 19348.724: 97.7121% ( 4) 00:31:18.529 19348.724 - 19473.554: 97.7400% ( 3) 00:31:18.529 19473.554 - 19598.385: 97.7679% ( 3) 00:31:18.529 19598.385 - 19723.215: 97.8051% ( 4) 00:31:18.529 19723.215 - 19848.046: 97.8702% ( 7) 00:31:18.529 19848.046 - 19972.876: 97.9260% ( 6) 00:31:18.529 19972.876 - 20097.707: 97.9911% ( 7) 00:31:18.529 20097.707 - 20222.537: 98.0469% ( 6) 00:31:18.529 20222.537 - 20347.368: 98.0934% ( 5) 00:31:18.529 20347.368 - 20472.198: 98.1492% ( 6) 00:31:18.529 20472.198 - 20597.029: 98.1957% ( 5) 00:31:18.529 20597.029 - 20721.859: 98.2608% ( 7) 00:31:18.529 20721.859 - 20846.690: 98.3166% ( 6) 00:31:18.529 20846.690 - 20971.520: 98.3817% ( 7) 00:31:18.529 20971.520 - 21096.350: 98.4375% ( 6) 00:31:18.529 21096.350 - 21221.181: 98.5026% ( 7) 00:31:18.529 21221.181 - 21346.011: 98.5491% ( 5) 00:31:18.529 21346.011 - 21470.842: 98.5863% ( 4) 00:31:18.529 21470.842 - 21595.672: 98.6142% ( 3) 00:31:18.529 21595.672 - 21720.503: 98.6328% ( 2) 00:31:18.529 21720.503 - 21845.333: 98.6607% ( 3) 00:31:18.529 21845.333 - 21970.164: 98.6886% ( 3) 00:31:18.529 21970.164 - 22094.994: 98.7165% ( 3) 00:31:18.529 22094.994 - 22219.825: 98.7444% ( 3) 00:31:18.529 22219.825 - 22344.655: 98.7816% ( 4) 00:31:18.529 22344.655 - 22469.486: 98.8560% ( 8) 00:31:18.529 22469.486 - 22594.316: 98.9025% ( 5) 00:31:18.529 22594.316 - 22719.147: 98.9211% ( 2) 00:31:18.529 22719.147 - 22843.977: 98.9397% ( 2) 00:31:18.529 22843.977 - 22968.808: 98.9676% ( 3) 00:31:18.529 22968.808 - 23093.638: 98.9955% ( 3) 00:31:18.529 23093.638 - 23218.469: 99.0234% ( 3) 00:31:18.529 23218.469 - 23343.299: 99.0513% ( 3) 00:31:18.529 23343.299 - 23468.130: 99.0792% ( 3) 00:31:18.529 23468.130 - 23592.960: 99.0978% ( 2) 00:31:18.529 23592.960 - 23717.790: 99.1257% ( 3) 00:31:18.529 23717.790 - 23842.621: 99.1536% ( 3) 00:31:18.529 23842.621 - 23967.451: 99.1815% ( 3) 00:31:18.529 23967.451 - 24092.282: 99.2094% ( 3) 00:31:18.529 24092.282 - 24217.112: 99.2374% ( 3) 00:31:18.529 24217.112 - 24341.943: 99.2653% ( 3) 00:31:18.529 24341.943 - 24466.773: 99.2932% ( 3) 00:31:18.529 24466.773 - 24591.604: 99.3118% ( 2) 00:31:18.529 24591.604 - 24716.434: 99.3397% ( 3) 00:31:18.529 24716.434 - 24841.265: 99.3676% ( 3) 00:31:18.529 24841.265 - 24966.095: 99.3955% ( 3) 00:31:18.529 24966.095 - 25090.926: 99.4048% ( 1) 00:31:18.529 30833.128 - 30957.958: 99.4234% ( 2) 00:31:18.529 30957.958 - 31082.789: 99.4513% ( 3) 00:31:18.529 31082.789 - 31207.619: 99.4792% ( 3) 00:31:18.530 31207.619 - 31332.450: 99.5071% ( 3) 00:31:18.530 31332.450 - 31457.280: 99.5350% ( 3) 00:31:18.530 31457.280 - 31582.110: 99.5629% ( 3) 00:31:18.530 31582.110 - 31706.941: 99.6001% ( 4) 00:31:18.530 31706.941 - 31831.771: 99.6280% ( 3) 00:31:18.530 31831.771 - 31956.602: 99.6559% ( 3) 00:31:18.530 31956.602 - 32206.263: 99.7117% ( 6) 00:31:18.530 32206.263 - 32455.924: 99.7675% ( 6) 00:31:18.530 32455.924 - 32705.585: 99.8233% ( 6) 00:31:18.530 32705.585 - 32955.246: 99.8698% ( 5) 00:31:18.530 32955.246 - 33204.907: 99.9256% ( 6) 00:31:18.530 33204.907 - 33454.568: 99.9907% ( 7) 00:31:18.530 33454.568 - 33704.229: 100.0000% ( 1) 00:31:18.530 00:31:18.530 13:26:11 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 
12288 -t 1 -LL -i 0 00:31:19.908 Initializing NVMe Controllers 00:31:19.908 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:19.908 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:19.908 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:19.908 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:19.908 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:31:19.908 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:31:19.908 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:31:19.908 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:31:19.908 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:31:19.908 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:31:19.908 Initialization complete. Launching workers. 00:31:19.908 ======================================================== 00:31:19.908 Latency(us) 00:31:19.908 Device Information : IOPS MiB/s Average min max 00:31:19.908 PCIE (0000:00:10.0) NSID 1 from core 0: 8988.08 105.33 14292.53 10541.21 51913.17 00:31:19.908 PCIE (0000:00:11.0) NSID 1 from core 0: 8988.08 105.33 14252.62 10592.04 48264.93 00:31:19.908 PCIE (0000:00:13.0) NSID 1 from core 0: 8988.08 105.33 14212.50 10671.86 45317.74 00:31:19.908 PCIE (0000:00:12.0) NSID 1 from core 0: 8988.08 105.33 14172.87 10657.25 41684.90 00:31:19.908 PCIE (0000:00:12.0) NSID 2 from core 0: 8988.08 105.33 14131.90 10469.25 38173.27 00:31:19.908 PCIE (0000:00:12.0) NSID 3 from core 0: 8988.08 105.33 14089.22 10721.13 34479.75 00:31:19.908 ======================================================== 00:31:19.908 Total : 53928.50 631.97 14191.94 10469.25 51913.17 00:31:19.908 00:31:19.908 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:31:19.908 ================================================================================= 00:31:19.908 1.00000% : 10860.251us 00:31:19.908 10.00000% : 11734.065us 00:31:19.908 25.00000% : 12420.632us 00:31:19.908 50.00000% : 13419.276us 00:31:19.908 75.00000% : 14854.827us 00:31:19.908 90.00000% : 16976.945us 00:31:19.908 95.00000% : 19099.063us 00:31:19.908 98.00000% : 21221.181us 00:31:19.908 99.00000% : 40694.735us 00:31:19.908 99.50000% : 49682.530us 00:31:19.908 99.90000% : 51430.156us 00:31:19.908 99.99000% : 51929.478us 00:31:19.908 99.99900% : 51929.478us 00:31:19.908 99.99990% : 51929.478us 00:31:19.908 99.99999% : 51929.478us 00:31:19.908 00:31:19.908 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:31:19.908 ================================================================================= 00:31:19.908 1.00000% : 11047.497us 00:31:19.908 10.00000% : 11734.065us 00:31:19.908 25.00000% : 12420.632us 00:31:19.908 50.00000% : 13419.276us 00:31:19.908 75.00000% : 14979.657us 00:31:19.908 90.00000% : 16727.284us 00:31:19.908 95.00000% : 18849.402us 00:31:19.908 98.00000% : 21470.842us 00:31:19.908 99.00000% : 38198.126us 00:31:19.908 99.50000% : 46187.276us 00:31:19.908 99.90000% : 47934.903us 00:31:19.908 99.99000% : 48434.225us 00:31:19.908 99.99900% : 48434.225us 00:31:19.908 99.99990% : 48434.225us 00:31:19.908 99.99999% : 48434.225us 00:31:19.908 00:31:19.908 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:31:19.908 ================================================================================= 00:31:19.908 1.00000% : 10985.082us 00:31:19.908 10.00000% : 11734.065us 00:31:19.908 25.00000% : 12483.048us 00:31:19.908 50.00000% : 13356.861us 00:31:19.908 75.00000% : 14917.242us 00:31:19.908 90.00000% : 16976.945us 00:31:19.908 
95.00000% : 18724.571us 00:31:19.908 98.00000% : 21346.011us 00:31:19.908 99.00000% : 35202.194us 00:31:19.908 99.50000% : 43191.345us 00:31:19.908 99.90000% : 44938.971us 00:31:19.908 99.99000% : 45438.293us 00:31:19.908 99.99900% : 45438.293us 00:31:19.908 99.99990% : 45438.293us 00:31:19.908 99.99999% : 45438.293us 00:31:19.908 00:31:19.908 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:31:19.908 ================================================================================= 00:31:19.908 1.00000% : 11047.497us 00:31:19.909 10.00000% : 11734.065us 00:31:19.909 25.00000% : 12483.048us 00:31:19.909 50.00000% : 13356.861us 00:31:19.909 75.00000% : 14917.242us 00:31:19.909 90.00000% : 16976.945us 00:31:19.909 95.00000% : 19099.063us 00:31:19.909 98.00000% : 21470.842us 00:31:19.909 99.00000% : 31706.941us 00:31:19.909 99.50000% : 39696.091us 00:31:19.909 99.90000% : 41443.718us 00:31:19.909 99.99000% : 41693.379us 00:31:19.909 99.99900% : 41693.379us 00:31:19.909 99.99990% : 41693.379us 00:31:19.909 99.99999% : 41693.379us 00:31:19.909 00:31:19.909 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:31:19.909 ================================================================================= 00:31:19.909 1.00000% : 11047.497us 00:31:19.909 10.00000% : 11796.480us 00:31:19.909 25.00000% : 12483.048us 00:31:19.909 50.00000% : 13356.861us 00:31:19.909 75.00000% : 14917.242us 00:31:19.909 90.00000% : 17351.436us 00:31:19.909 95.00000% : 19223.893us 00:31:19.909 98.00000% : 21970.164us 00:31:19.909 99.00000% : 28086.857us 00:31:19.909 99.50000% : 36200.838us 00:31:19.909 99.90000% : 37948.465us 00:31:19.909 99.99000% : 38198.126us 00:31:19.909 99.99900% : 38198.126us 00:31:19.909 99.99990% : 38198.126us 00:31:19.909 99.99999% : 38198.126us 00:31:19.909 00:31:19.909 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:31:19.909 ================================================================================= 00:31:19.909 1.00000% : 11047.497us 00:31:19.909 10.00000% : 11858.895us 00:31:19.909 25.00000% : 12483.048us 00:31:19.909 50.00000% : 13356.861us 00:31:19.909 75.00000% : 14854.827us 00:31:19.909 90.00000% : 17476.267us 00:31:19.909 95.00000% : 19348.724us 00:31:19.909 98.00000% : 21595.672us 00:31:19.909 99.00000% : 24092.282us 00:31:19.909 99.50000% : 32455.924us 00:31:19.909 99.90000% : 34203.550us 00:31:19.909 99.99000% : 34702.872us 00:31:19.909 99.99900% : 34702.872us 00:31:19.909 99.99990% : 34702.872us 00:31:19.909 99.99999% : 34702.872us 00:31:19.909 00:31:19.909 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:31:19.909 ============================================================================== 00:31:19.909 Range in us Cumulative IO count 00:31:19.909 10485.760 - 10548.175: 0.0111% ( 1) 00:31:19.909 10548.175 - 10610.590: 0.1441% ( 12) 00:31:19.909 10610.590 - 10673.006: 0.2327% ( 8) 00:31:19.909 10673.006 - 10735.421: 0.2770% ( 4) 00:31:19.909 10735.421 - 10797.836: 0.7979% ( 47) 00:31:19.909 10797.836 - 10860.251: 1.5736% ( 70) 00:31:19.909 10860.251 - 10922.667: 1.8506% ( 25) 00:31:19.909 10922.667 - 10985.082: 2.1831% ( 30) 00:31:19.909 10985.082 - 11047.497: 2.6596% ( 43) 00:31:19.909 11047.497 - 11109.912: 2.9809% ( 29) 00:31:19.909 11109.912 - 11172.328: 3.4353% ( 41) 00:31:19.909 11172.328 - 11234.743: 4.1445% ( 64) 00:31:19.909 11234.743 - 11297.158: 4.7983% ( 59) 00:31:19.909 11297.158 - 11359.573: 5.4078% ( 55) 00:31:19.909 11359.573 - 11421.989: 5.8954% ( 44) 00:31:19.909 11421.989 - 11484.404: 
6.7265% ( 75) 00:31:19.909 11484.404 - 11546.819: 7.5355% ( 73) 00:31:19.909 11546.819 - 11609.234: 8.3223% ( 71) 00:31:19.909 11609.234 - 11671.650: 9.1423% ( 74) 00:31:19.909 11671.650 - 11734.065: 10.3834% ( 112) 00:31:19.909 11734.065 - 11796.480: 11.5470% ( 105) 00:31:19.909 11796.480 - 11858.895: 12.6330% ( 98) 00:31:19.909 11858.895 - 11921.310: 13.8852% ( 113) 00:31:19.909 11921.310 - 11983.726: 15.1817% ( 117) 00:31:19.909 11983.726 - 12046.141: 16.4229% ( 112) 00:31:19.909 12046.141 - 12108.556: 18.1738% ( 158) 00:31:19.909 12108.556 - 12170.971: 19.6587% ( 134) 00:31:19.909 12170.971 - 12233.387: 20.9996% ( 121) 00:31:19.909 12233.387 - 12295.802: 22.3404% ( 121) 00:31:19.909 12295.802 - 12358.217: 23.7256% ( 125) 00:31:19.909 12358.217 - 12420.632: 25.2105% ( 134) 00:31:19.909 12420.632 - 12483.048: 26.6733% ( 132) 00:31:19.909 12483.048 - 12545.463: 28.3245% ( 149) 00:31:19.909 12545.463 - 12607.878: 30.0975% ( 160) 00:31:19.909 12607.878 - 12670.293: 31.7265% ( 147) 00:31:19.909 12670.293 - 12732.709: 33.4441% ( 155) 00:31:19.909 12732.709 - 12795.124: 34.9069% ( 132) 00:31:19.909 12795.124 - 12857.539: 36.5027% ( 144) 00:31:19.909 12857.539 - 12919.954: 38.0984% ( 144) 00:31:19.909 12919.954 - 12982.370: 39.5723% ( 133) 00:31:19.909 12982.370 - 13044.785: 41.1680% ( 144) 00:31:19.909 13044.785 - 13107.200: 42.5532% ( 125) 00:31:19.909 13107.200 - 13169.615: 44.1268% ( 142) 00:31:19.909 13169.615 - 13232.030: 45.6449% ( 137) 00:31:19.909 13232.030 - 13294.446: 47.2296% ( 143) 00:31:19.909 13294.446 - 13356.861: 48.6702% ( 130) 00:31:19.909 13356.861 - 13419.276: 50.1108% ( 130) 00:31:19.909 13419.276 - 13481.691: 51.7620% ( 149) 00:31:19.909 13481.691 - 13544.107: 53.0142% ( 113) 00:31:19.909 13544.107 - 13606.522: 54.2221% ( 109) 00:31:19.909 13606.522 - 13668.937: 55.3967% ( 106) 00:31:19.909 13668.937 - 13731.352: 56.5160% ( 101) 00:31:19.909 13731.352 - 13793.768: 57.5355% ( 92) 00:31:19.909 13793.768 - 13856.183: 58.5550% ( 92) 00:31:19.909 13856.183 - 13918.598: 59.7961% ( 112) 00:31:19.909 13918.598 - 13981.013: 61.1037% ( 118) 00:31:19.909 13981.013 - 14043.429: 62.1676% ( 96) 00:31:19.909 14043.429 - 14105.844: 63.3533% ( 107) 00:31:19.909 14105.844 - 14168.259: 64.4725% ( 101) 00:31:19.909 14168.259 - 14230.674: 65.3812% ( 82) 00:31:19.909 14230.674 - 14293.090: 66.4340% ( 95) 00:31:19.909 14293.090 - 14355.505: 67.5643% ( 102) 00:31:19.909 14355.505 - 14417.920: 68.7500% ( 107) 00:31:19.909 14417.920 - 14480.335: 69.7030% ( 86) 00:31:19.909 14480.335 - 14542.750: 70.7225% ( 92) 00:31:19.909 14542.750 - 14605.166: 71.5647% ( 76) 00:31:19.909 14605.166 - 14667.581: 72.3737% ( 73) 00:31:19.909 14667.581 - 14729.996: 73.3821% ( 91) 00:31:19.909 14729.996 - 14792.411: 74.2797% ( 81) 00:31:19.909 14792.411 - 14854.827: 75.1108% ( 75) 00:31:19.909 14854.827 - 14917.242: 76.0195% ( 82) 00:31:19.909 14917.242 - 14979.657: 76.7620% ( 67) 00:31:19.909 14979.657 - 15042.072: 77.5931% ( 75) 00:31:19.909 15042.072 - 15104.488: 78.3688% ( 70) 00:31:19.909 15104.488 - 15166.903: 79.1002% ( 66) 00:31:19.909 15166.903 - 15229.318: 79.8870% ( 71) 00:31:19.909 15229.318 - 15291.733: 80.5519% ( 60) 00:31:19.909 15291.733 - 15354.149: 81.1392% ( 53) 00:31:19.909 15354.149 - 15416.564: 81.6379% ( 45) 00:31:19.909 15416.564 - 15478.979: 82.1365% ( 45) 00:31:19.909 15478.979 - 15541.394: 82.6241% ( 44) 00:31:19.909 15541.394 - 15603.810: 83.1228% ( 45) 00:31:19.909 15603.810 - 15666.225: 83.5217% ( 36) 00:31:19.909 15666.225 - 15728.640: 84.0204% ( 45) 00:31:19.909 15728.640 - 15791.055: 
84.4526% ( 39) 00:31:19.909 15791.055 - 15853.470: 84.9291% ( 43) 00:31:19.909 15853.470 - 15915.886: 85.3613% ( 39) 00:31:19.909 15915.886 - 15978.301: 85.8045% ( 40) 00:31:19.909 15978.301 - 16103.131: 86.6578% ( 77) 00:31:19.909 16103.131 - 16227.962: 87.2895% ( 57) 00:31:19.909 16227.962 - 16352.792: 87.9322% ( 58) 00:31:19.909 16352.792 - 16477.623: 88.5195% ( 53) 00:31:19.909 16477.623 - 16602.453: 89.0514% ( 48) 00:31:19.909 16602.453 - 16727.284: 89.4504% ( 36) 00:31:19.909 16727.284 - 16852.114: 89.7717% ( 29) 00:31:19.909 16852.114 - 16976.945: 90.0931% ( 29) 00:31:19.909 16976.945 - 17101.775: 90.5585% ( 42) 00:31:19.909 17101.775 - 17226.606: 90.9020% ( 31) 00:31:19.909 17226.606 - 17351.436: 91.1237% ( 20) 00:31:19.909 17351.436 - 17476.267: 91.3231% ( 18) 00:31:19.909 17476.267 - 17601.097: 91.5891% ( 24) 00:31:19.910 17601.097 - 17725.928: 91.8994% ( 28) 00:31:19.910 17725.928 - 17850.758: 92.1653% ( 24) 00:31:19.910 17850.758 - 17975.589: 92.4202% ( 23) 00:31:19.910 17975.589 - 18100.419: 92.6640% ( 22) 00:31:19.910 18100.419 - 18225.250: 92.9965% ( 30) 00:31:19.910 18225.250 - 18350.080: 93.2292% ( 21) 00:31:19.910 18350.080 - 18474.910: 93.4286% ( 18) 00:31:19.910 18474.910 - 18599.741: 93.7057% ( 25) 00:31:19.910 18599.741 - 18724.571: 94.0714% ( 33) 00:31:19.910 18724.571 - 18849.402: 94.4814% ( 37) 00:31:19.910 18849.402 - 18974.232: 94.7695% ( 26) 00:31:19.910 18974.232 - 19099.063: 95.0798% ( 28) 00:31:19.910 19099.063 - 19223.893: 95.4122% ( 30) 00:31:19.910 19223.893 - 19348.724: 95.7004% ( 26) 00:31:19.910 19348.724 - 19473.554: 95.9441% ( 22) 00:31:19.910 19473.554 - 19598.385: 96.2544% ( 28) 00:31:19.910 19598.385 - 19723.215: 96.4871% ( 21) 00:31:19.910 19723.215 - 19848.046: 96.7309% ( 22) 00:31:19.910 19848.046 - 19972.876: 96.8750% ( 13) 00:31:19.910 19972.876 - 20097.707: 96.9969% ( 11) 00:31:19.910 20097.707 - 20222.537: 97.1520% ( 14) 00:31:19.910 20222.537 - 20347.368: 97.2739% ( 11) 00:31:19.910 20347.368 - 20472.198: 97.4069% ( 12) 00:31:19.910 20472.198 - 20597.029: 97.5510% ( 13) 00:31:19.910 20597.029 - 20721.859: 97.6618% ( 10) 00:31:19.910 20721.859 - 20846.690: 97.7726% ( 10) 00:31:19.910 20846.690 - 20971.520: 97.8613% ( 8) 00:31:19.910 20971.520 - 21096.350: 97.9721% ( 10) 00:31:19.910 21096.350 - 21221.181: 98.0829% ( 10) 00:31:19.910 21221.181 - 21346.011: 98.1826% ( 9) 00:31:19.910 21346.011 - 21470.842: 98.2713% ( 8) 00:31:19.910 21470.842 - 21595.672: 98.3488% ( 7) 00:31:19.910 21595.672 - 21720.503: 98.4153% ( 6) 00:31:19.910 21720.503 - 21845.333: 98.4264% ( 1) 00:31:19.910 21970.164 - 22094.994: 98.4597% ( 3) 00:31:19.910 22094.994 - 22219.825: 98.4707% ( 1) 00:31:19.910 22219.825 - 22344.655: 98.4818% ( 1) 00:31:19.910 22344.655 - 22469.486: 98.5040% ( 2) 00:31:19.910 22469.486 - 22594.316: 98.5262% ( 2) 00:31:19.910 22594.316 - 22719.147: 98.5372% ( 1) 00:31:19.910 22719.147 - 22843.977: 98.5594% ( 2) 00:31:19.910 22843.977 - 22968.808: 98.5705% ( 1) 00:31:19.910 22968.808 - 23093.638: 98.5816% ( 1) 00:31:19.910 38447.787 - 38697.448: 98.6259% ( 4) 00:31:19.910 38697.448 - 38947.109: 98.6813% ( 5) 00:31:19.910 38947.109 - 39196.770: 98.7367% ( 5) 00:31:19.910 39196.770 - 39446.430: 98.7921% ( 5) 00:31:19.910 39446.430 - 39696.091: 98.8254% ( 3) 00:31:19.910 39696.091 - 39945.752: 98.8808% ( 5) 00:31:19.910 39945.752 - 40195.413: 98.9251% ( 4) 00:31:19.910 40195.413 - 40445.074: 98.9805% ( 5) 00:31:19.910 40445.074 - 40694.735: 99.0359% ( 5) 00:31:19.910 40694.735 - 40944.396: 99.0913% ( 5) 00:31:19.910 40944.396 - 41194.057: 99.1246% 
( 3) 00:31:19.910 41194.057 - 41443.718: 99.1800% ( 5) 00:31:19.910 41443.718 - 41693.379: 99.2354% ( 5) 00:31:19.910 41693.379 - 41943.040: 99.2908% ( 5) 00:31:19.910 48434.225 - 48683.886: 99.3129% ( 2) 00:31:19.910 48683.886 - 48933.547: 99.3573% ( 4) 00:31:19.910 48933.547 - 49183.208: 99.4127% ( 5) 00:31:19.910 49183.208 - 49432.869: 99.4681% ( 5) 00:31:19.910 49432.869 - 49682.530: 99.5235% ( 5) 00:31:19.910 49682.530 - 49932.190: 99.5900% ( 6) 00:31:19.910 49932.190 - 50181.851: 99.6343% ( 4) 00:31:19.910 50181.851 - 50431.512: 99.6897% ( 5) 00:31:19.910 50431.512 - 50681.173: 99.7451% ( 5) 00:31:19.910 50681.173 - 50930.834: 99.7895% ( 4) 00:31:19.910 50930.834 - 51180.495: 99.8449% ( 5) 00:31:19.910 51180.495 - 51430.156: 99.9003% ( 5) 00:31:19.910 51430.156 - 51679.817: 99.9557% ( 5) 00:31:19.910 51679.817 - 51929.478: 100.0000% ( 4) 00:31:19.910 00:31:19.910 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:31:19.910 ============================================================================== 00:31:19.910 Range in us Cumulative IO count 00:31:19.910 10548.175 - 10610.590: 0.0111% ( 1) 00:31:19.910 10735.421 - 10797.836: 0.0887% ( 7) 00:31:19.910 10797.836 - 10860.251: 0.2105% ( 11) 00:31:19.910 10860.251 - 10922.667: 0.3435% ( 12) 00:31:19.910 10922.667 - 10985.082: 0.7314% ( 35) 00:31:19.910 10985.082 - 11047.497: 1.2190% ( 44) 00:31:19.910 11047.497 - 11109.912: 1.9171% ( 63) 00:31:19.910 11109.912 - 11172.328: 2.5820% ( 60) 00:31:19.910 11172.328 - 11234.743: 3.5572% ( 88) 00:31:19.910 11234.743 - 11297.158: 4.5213% ( 87) 00:31:19.910 11297.158 - 11359.573: 5.4300% ( 82) 00:31:19.910 11359.573 - 11421.989: 6.1503% ( 65) 00:31:19.910 11421.989 - 11484.404: 6.9481% ( 72) 00:31:19.910 11484.404 - 11546.819: 7.7238% ( 70) 00:31:19.910 11546.819 - 11609.234: 8.7766% ( 95) 00:31:19.910 11609.234 - 11671.650: 9.8626% ( 98) 00:31:19.910 11671.650 - 11734.065: 10.8378% ( 88) 00:31:19.910 11734.065 - 11796.480: 11.7465% ( 82) 00:31:19.910 11796.480 - 11858.895: 12.5554% ( 73) 00:31:19.910 11858.895 - 11921.310: 13.5860% ( 93) 00:31:19.910 11921.310 - 11983.726: 14.4171% ( 75) 00:31:19.910 11983.726 - 12046.141: 15.5585% ( 103) 00:31:19.910 12046.141 - 12108.556: 16.7442% ( 107) 00:31:19.910 12108.556 - 12170.971: 18.1184% ( 124) 00:31:19.910 12170.971 - 12233.387: 19.6254% ( 136) 00:31:19.910 12233.387 - 12295.802: 21.3431% ( 155) 00:31:19.910 12295.802 - 12358.217: 23.4707% ( 192) 00:31:19.910 12358.217 - 12420.632: 25.4876% ( 182) 00:31:19.910 12420.632 - 12483.048: 27.2717% ( 161) 00:31:19.910 12483.048 - 12545.463: 29.4215% ( 194) 00:31:19.910 12545.463 - 12607.878: 31.3387% ( 173) 00:31:19.910 12607.878 - 12670.293: 33.1671% ( 165) 00:31:19.910 12670.293 - 12732.709: 35.2283% ( 186) 00:31:19.910 12732.709 - 12795.124: 37.1121% ( 170) 00:31:19.910 12795.124 - 12857.539: 38.8520% ( 157) 00:31:19.910 12857.539 - 12919.954: 40.5031% ( 149) 00:31:19.910 12919.954 - 12982.370: 41.8551% ( 122) 00:31:19.910 12982.370 - 13044.785: 43.2957% ( 130) 00:31:19.910 13044.785 - 13107.200: 44.6809% ( 125) 00:31:19.910 13107.200 - 13169.615: 45.9220% ( 112) 00:31:19.910 13169.615 - 13232.030: 47.2296% ( 118) 00:31:19.910 13232.030 - 13294.446: 48.5705% ( 121) 00:31:19.910 13294.446 - 13356.861: 49.9224% ( 122) 00:31:19.910 13356.861 - 13419.276: 51.2965% ( 124) 00:31:19.910 13419.276 - 13481.691: 52.4934% ( 108) 00:31:19.910 13481.691 - 13544.107: 53.7456% ( 113) 00:31:19.910 13544.107 - 13606.522: 54.7651% ( 92) 00:31:19.910 13606.522 - 13668.937: 55.7292% ( 87) 00:31:19.910 
13668.937 - 13731.352: 56.7487% ( 92) 00:31:19.910 13731.352 - 13793.768: 57.6906% ( 85) 00:31:19.910 13793.768 - 13856.183: 58.6104% ( 83) 00:31:19.910 13856.183 - 13918.598: 59.6077% ( 90) 00:31:19.910 13918.598 - 13981.013: 60.6051% ( 90) 00:31:19.910 13981.013 - 14043.429: 61.5691% ( 87) 00:31:19.910 14043.429 - 14105.844: 62.7105% ( 103) 00:31:19.910 14105.844 - 14168.259: 63.9406% ( 111) 00:31:19.910 14168.259 - 14230.674: 64.8936% ( 86) 00:31:19.910 14230.674 - 14293.090: 65.7469% ( 77) 00:31:19.910 14293.090 - 14355.505: 66.6002% ( 77) 00:31:19.910 14355.505 - 14417.920: 67.4645% ( 78) 00:31:19.910 14417.920 - 14480.335: 68.2735% ( 73) 00:31:19.910 14480.335 - 14542.750: 69.0160% ( 67) 00:31:19.910 14542.750 - 14605.166: 69.8138% ( 72) 00:31:19.910 14605.166 - 14667.581: 70.7890% ( 88) 00:31:19.910 14667.581 - 14729.996: 71.7642% ( 88) 00:31:19.910 14729.996 - 14792.411: 72.8059% ( 94) 00:31:19.910 14792.411 - 14854.827: 73.8143% ( 91) 00:31:19.910 14854.827 - 14917.242: 74.7230% ( 82) 00:31:19.910 14917.242 - 14979.657: 75.6871% ( 87) 00:31:19.911 14979.657 - 15042.072: 76.7066% ( 92) 00:31:19.911 15042.072 - 15104.488: 77.6263% ( 83) 00:31:19.911 15104.488 - 15166.903: 78.5018% ( 79) 00:31:19.911 15166.903 - 15229.318: 79.3329% ( 75) 00:31:19.911 15229.318 - 15291.733: 79.9756% ( 58) 00:31:19.911 15291.733 - 15354.149: 80.5408% ( 51) 00:31:19.911 15354.149 - 15416.564: 81.1613% ( 56) 00:31:19.911 15416.564 - 15478.979: 81.7376% ( 52) 00:31:19.911 15478.979 - 15541.394: 82.3360% ( 54) 00:31:19.911 15541.394 - 15603.810: 82.9898% ( 59) 00:31:19.911 15603.810 - 15666.225: 83.6547% ( 60) 00:31:19.911 15666.225 - 15728.640: 84.2753% ( 56) 00:31:19.911 15728.640 - 15791.055: 84.9402% ( 60) 00:31:19.911 15791.055 - 15853.470: 85.4388% ( 45) 00:31:19.911 15853.470 - 15915.886: 85.9375% ( 45) 00:31:19.911 15915.886 - 15978.301: 86.3697% ( 39) 00:31:19.911 15978.301 - 16103.131: 87.2673% ( 81) 00:31:19.911 16103.131 - 16227.962: 88.0762% ( 73) 00:31:19.911 16227.962 - 16352.792: 88.8520% ( 70) 00:31:19.911 16352.792 - 16477.623: 89.2841% ( 39) 00:31:19.911 16477.623 - 16602.453: 89.7274% ( 40) 00:31:19.911 16602.453 - 16727.284: 90.0266% ( 27) 00:31:19.911 16727.284 - 16852.114: 90.2261% ( 18) 00:31:19.911 16852.114 - 16976.945: 90.3701% ( 13) 00:31:19.911 16976.945 - 17101.775: 90.4699% ( 9) 00:31:19.911 17101.775 - 17226.606: 90.8466% ( 34) 00:31:19.911 17226.606 - 17351.436: 90.9796% ( 12) 00:31:19.911 17351.436 - 17476.267: 91.1126% ( 12) 00:31:19.911 17476.267 - 17601.097: 91.2456% ( 12) 00:31:19.911 17601.097 - 17725.928: 91.4340% ( 17) 00:31:19.911 17725.928 - 17850.758: 91.7664% ( 30) 00:31:19.911 17850.758 - 17975.589: 92.4313% ( 60) 00:31:19.911 17975.589 - 18100.419: 92.9965% ( 51) 00:31:19.911 18100.419 - 18225.250: 93.3732% ( 34) 00:31:19.911 18225.250 - 18350.080: 93.7943% ( 38) 00:31:19.911 18350.080 - 18474.910: 94.2154% ( 38) 00:31:19.911 18474.910 - 18599.741: 94.4592% ( 22) 00:31:19.911 18599.741 - 18724.571: 94.7252% ( 24) 00:31:19.911 18724.571 - 18849.402: 95.0133% ( 26) 00:31:19.911 18849.402 - 18974.232: 95.2238% ( 19) 00:31:19.911 18974.232 - 19099.063: 95.4122% ( 17) 00:31:19.911 19099.063 - 19223.893: 95.6006% ( 17) 00:31:19.911 19223.893 - 19348.724: 95.8001% ( 18) 00:31:19.911 19348.724 - 19473.554: 95.9774% ( 16) 00:31:19.911 19473.554 - 19598.385: 96.0993% ( 11) 00:31:19.911 19598.385 - 19723.215: 96.1879% ( 8) 00:31:19.911 19723.215 - 19848.046: 96.2655% ( 7) 00:31:19.911 19848.046 - 19972.876: 96.2988% ( 3) 00:31:19.911 19972.876 - 20097.707: 96.3542% ( 5) 
00:31:19.911 20097.707 - 20222.537: 96.5758% ( 20) 00:31:19.911 20222.537 - 20347.368: 96.7974% ( 20) 00:31:19.911 20347.368 - 20472.198: 96.9637% ( 15) 00:31:19.911 20472.198 - 20597.029: 97.1299% ( 15) 00:31:19.911 20597.029 - 20721.859: 97.2961% ( 15) 00:31:19.911 20721.859 - 20846.690: 97.4734% ( 16) 00:31:19.911 20846.690 - 20971.520: 97.6175% ( 13) 00:31:19.911 20971.520 - 21096.350: 97.7726% ( 14) 00:31:19.911 21096.350 - 21221.181: 97.8945% ( 11) 00:31:19.911 21221.181 - 21346.011: 97.9942% ( 9) 00:31:19.911 21346.011 - 21470.842: 98.0607% ( 6) 00:31:19.911 21470.842 - 21595.672: 98.1161% ( 5) 00:31:19.911 21595.672 - 21720.503: 98.1826% ( 6) 00:31:19.911 21720.503 - 21845.333: 98.2491% ( 6) 00:31:19.911 21845.333 - 21970.164: 98.2934% ( 4) 00:31:19.911 21970.164 - 22094.994: 98.3156% ( 2) 00:31:19.911 22094.994 - 22219.825: 98.3378% ( 2) 00:31:19.911 22219.825 - 22344.655: 98.3599% ( 2) 00:31:19.911 22344.655 - 22469.486: 98.3932% ( 3) 00:31:19.911 22469.486 - 22594.316: 98.4153% ( 2) 00:31:19.911 22594.316 - 22719.147: 98.4486% ( 3) 00:31:19.911 22719.147 - 22843.977: 98.4707% ( 2) 00:31:19.911 22843.977 - 22968.808: 98.5040% ( 3) 00:31:19.911 22968.808 - 23093.638: 98.5262% ( 2) 00:31:19.911 23093.638 - 23218.469: 98.5483% ( 2) 00:31:19.911 23218.469 - 23343.299: 98.5816% ( 3) 00:31:19.911 36450.499 - 36700.160: 98.7367% ( 14) 00:31:19.911 36700.160 - 36949.821: 98.7810% ( 4) 00:31:19.911 36949.821 - 37199.482: 98.8364% ( 5) 00:31:19.911 37199.482 - 37449.143: 98.8918% ( 5) 00:31:19.911 37449.143 - 37698.804: 98.9473% ( 5) 00:31:19.911 37698.804 - 37948.465: 98.9916% ( 4) 00:31:19.911 37948.465 - 38198.126: 99.0470% ( 5) 00:31:19.911 38198.126 - 38447.787: 99.1024% ( 5) 00:31:19.911 38447.787 - 38697.448: 99.1578% ( 5) 00:31:19.911 38697.448 - 38947.109: 99.2021% ( 4) 00:31:19.911 38947.109 - 39196.770: 99.2575% ( 5) 00:31:19.911 39196.770 - 39446.430: 99.2908% ( 3) 00:31:19.911 44938.971 - 45188.632: 99.3019% ( 1) 00:31:19.911 45188.632 - 45438.293: 99.3573% ( 5) 00:31:19.911 45438.293 - 45687.954: 99.4238% ( 6) 00:31:19.911 45687.954 - 45937.615: 99.4792% ( 5) 00:31:19.911 45937.615 - 46187.276: 99.5346% ( 5) 00:31:19.911 46187.276 - 46436.937: 99.5900% ( 5) 00:31:19.911 46436.937 - 46686.598: 99.6343% ( 4) 00:31:19.911 46686.598 - 46936.259: 99.7008% ( 6) 00:31:19.911 46936.259 - 47185.920: 99.7562% ( 5) 00:31:19.911 47185.920 - 47435.581: 99.8116% ( 5) 00:31:19.911 47435.581 - 47685.242: 99.8670% ( 5) 00:31:19.911 47685.242 - 47934.903: 99.9224% ( 5) 00:31:19.911 47934.903 - 48184.564: 99.9778% ( 5) 00:31:19.911 48184.564 - 48434.225: 100.0000% ( 2) 00:31:19.911 00:31:19.911 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:31:19.911 ============================================================================== 00:31:19.911 Range in us Cumulative IO count 00:31:19.911 10610.590 - 10673.006: 0.0111% ( 1) 00:31:19.911 10673.006 - 10735.421: 0.0665% ( 5) 00:31:19.911 10735.421 - 10797.836: 0.1662% ( 9) 00:31:19.911 10797.836 - 10860.251: 0.3546% ( 17) 00:31:19.911 10860.251 - 10922.667: 0.5762% ( 20) 00:31:19.911 10922.667 - 10985.082: 1.0306% ( 41) 00:31:19.911 10985.082 - 11047.497: 1.5071% ( 43) 00:31:19.911 11047.497 - 11109.912: 2.1055% ( 54) 00:31:19.911 11109.912 - 11172.328: 2.7815% ( 61) 00:31:19.911 11172.328 - 11234.743: 3.5129% ( 66) 00:31:19.911 11234.743 - 11297.158: 4.1999% ( 62) 00:31:19.911 11297.158 - 11359.573: 5.1086% ( 82) 00:31:19.911 11359.573 - 11421.989: 5.9619% ( 77) 00:31:19.911 11421.989 - 11484.404: 6.9481% ( 89) 00:31:19.911 
00:31:19.911 [histogram rows omitted: cumulative "Range in us / IO count" buckets rising from 8.0563% at 11484.404-11546.819 us to 100.0000% at 45188.632-45438.293 us (3 IOs in final bucket)]
00:31:19.912 
00:31:19.912 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:31:19.912 ==============================================================================
00:31:19.912        Range in us     Cumulative    IO count
00:31:19.913 [histogram rows omitted: buckets from 10610.590-10673.006 us (0.0222%, 2 IOs) to 41443.718-41693.379 us (100.0000%, 6 IOs)]
00:31:19.914 
00:31:19.914 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:31:19.914 ==============================================================================
00:31:19.914        Range in us     Cumulative    IO count
00:31:19.915 [histogram rows omitted: buckets from 10423.345-10485.760 us (0.0111%, 1 IO) to 37948.465-38198.126 us (100.0000%, 5 IOs)]
00:31:19.915 
00:31:19.915 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:31:19.915 ==============================================================================
00:31:19.915        Range in us     Cumulative    IO count
00:31:19.916 [histogram rows omitted: buckets from 10673.006-10735.421 us (0.0332%, 3 IOs) to 34453.211-34702.872 us (100.0000%, 1 IO)]
00:31:19.916 
00:31:19.916 13:26:12 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:31:19.916 
00:31:19.916 real    0m2.991s
00:31:19.916 user    0m2.417s
00:31:19.916 sys     0m0.449s
00:31:19.916 13:26:12 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:19.916 13:26:12 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:31:19.916 ************************************
00:31:19.916 END TEST nvme_perf
00:31:19.916 ************************************
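A note on reading the latency histograms above: each row is one bucket, the percentage is the cumulative share of IOs at or below that bucket, and the parenthesized figure is the IO count that landed in the bucket itself, so a percentile falls in the first row whose cumulative share reaches the target. A minimal C sketch of that scan (a hypothetical helper, not part of SPDK or of this test; it assumes bare "lo - hi: pct% ( count )" rows on stdin, with the log's timestamp prefixes already stripped):

    #include <stdio.h>

    /* Scan cumulative-histogram rows of the form "lo - hi: pct% ( count )"
     * and report the first bucket whose cumulative percentage reaches the
     * target, i.e. the bucket containing that percentile. */
    int main(void)
    {
        double lo, hi, pct, target = 99.0;
        int count;
        char line[256];

        while (fgets(line, sizeof(line), stdin) != NULL) {
            if (sscanf(line, "%lf - %lf: %lf%% ( %d)", &lo, &hi, &pct, &count) == 4 &&
                pct >= target) {
                printf("p%.0f falls in bucket %.3f - %.3f us (%d IOs in bucket)\n",
                       target, lo, hi, count);
                return 0;
            }
        }
        fprintf(stderr, "target percentile not reached\n");
        return 1;
    }

Fed the NSID 1 rows, for example, it would report the first bucket at or past 99.0000% cumulative.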
13:26:12 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
13:26:12 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
13:26:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
13:26:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:31:20.175 ************************************
00:31:20.175 START TEST nvme_hello_world
00:31:20.175 ************************************
13:26:13 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:31:20.435 Initializing NVMe Controllers
00:31:20.435 Attached to 0000:00:10.0
00:31:20.435 Namespace ID: 1 size: 6GB
00:31:20.435 Attached to 0000:00:11.0
00:31:20.435 Namespace ID: 1 size: 5GB
00:31:20.435 Attached to 0000:00:13.0
00:31:20.435 Namespace ID: 1 size: 1GB
00:31:20.435 Attached to 0000:00:12.0
00:31:20.435 Namespace ID: 1 size: 4GB
00:31:20.435 Namespace ID: 2 size: 4GB
00:31:20.435 Namespace ID: 3 size: 4GB
00:31:20.435 Initialization complete.
00:31:20.435 INFO: using host memory buffer for IO
00:31:20.435 Hello world!
00:31:20.435 INFO: using host memory buffer for IO
00:31:20.435 Hello world!
00:31:20.435 INFO: using host memory buffer for IO
00:31:20.435 Hello world!
00:31:20.435 INFO: using host memory buffer for IO
00:31:20.435 Hello world!
00:31:20.435 INFO: using host memory buffer for IO
00:31:20.435 Hello world!
00:31:20.435 INFO: using host memory buffer for IO
00:31:20.435 Hello world!
00:31:20.435 ************************************
00:31:20.435 END TEST nvme_hello_world
00:31:20.435 ************************************
00:31:20.435 
00:31:20.435 real    0m0.498s
00:31:20.435 user    0m0.225s
00:31:20.435 sys     0m0.221s
13:26:13 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
13:26:13 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
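The hello_world example exercised above follows SPDK's standard enumeration flow: probe the local PCIe bus, attach every controller that reports in, and walk each controller's active namespaces (the "Attached to ... / Namespace ID: ... size: ..." lines) before doing one write/read through a host memory buffer. A condensed sketch of the probe/attach half (spdk_nvme_probe, the two callback signatures, and the namespace iterators are the real public SPDK API; the callback bodies are illustrative, and the IO half plus error handling are omitted):

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Called for every controller found during enumeration; returning true
     * asks the driver to attach it. */
    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true;
    }

    /* Called once a controller is attached; list its active namespaces,
     * as the log above does. */
    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
        uint32_t nsid;

        printf("Attached to %s\n", trid->traddr);
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            printf("  Namespace ID: %u size: %lluGB\n", nsid,
                   (unsigned long long)(spdk_nvme_ns_get_size(ns) / 1000000000));
        }
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);      /* hugepage/DPDK environment defaults */
        opts.name = "hello_world_sketch";
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* NULL transport ID: enumerate all local PCIe NVMe controllers. */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
            return 1;
        }
        return 0;
    }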
00:31:20.694 
13:26:13 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
13:26:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
13:26:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
13:26:13 nvme -- common/autotest_common.sh@10 -- # set +x
00:31:20.694 ************************************
00:31:20.694 START TEST nvme_sgl
00:31:20.694 ************************************
13:26:13 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:31:20.960 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:31:20.960 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:31:20.960 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:31:20.960 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:31:20.960 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:31:20.960 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:31:20.960 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:31:20.960 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:31:20.960 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:31:20.960 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:31:20.960 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:31:20.960 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:31:20.960 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:31:20.960 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:31:20.960 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:31:20.960 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:31:20.960 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:31:20.960 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:31:20.960 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:31:20.960 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:31:20.960 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:31:20.960 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:31:20.960 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:31:20.960 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:31:20.960 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:31:20.960 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:31:20.960 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:31:20.960 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:31:20.960 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:31:20.960 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:31:20.960 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:31:20.960 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:31:20.960 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:31:20.960 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:31:20.960 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:31:20.960 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:31:21.239 NVMe Readv/Writev Request test
00:31:21.239 Attached to 0000:00:10.0
00:31:21.239 Attached to 0000:00:11.0
00:31:21.239 Attached to 0000:00:13.0
00:31:21.240 Attached to 0000:00:12.0
00:31:21.240 0000:00:10.0: build_io_request_2 test passed
00:31:21.240 0000:00:10.0: build_io_request_4 test passed
00:31:21.240 0000:00:10.0: build_io_request_5 test passed
00:31:21.240 0000:00:10.0: build_io_request_6 test passed
00:31:21.240 0000:00:10.0: build_io_request_7 test passed
00:31:21.240 0000:00:10.0: build_io_request_10 test passed
00:31:21.240 0000:00:11.0: build_io_request_2 test passed
00:31:21.240 0000:00:11.0: build_io_request_4 test passed
00:31:21.240 0000:00:11.0: build_io_request_5 test passed
00:31:21.240 0000:00:11.0: build_io_request_6 test passed
00:31:21.240 0000:00:11.0: build_io_request_7 test passed
00:31:21.240 0000:00:11.0: build_io_request_10 test passed
00:31:21.240 Cleaning up...
00:31:21.240 ************************************
00:31:21.240 END TEST nvme_sgl
00:31:21.240 ************************************
00:31:21.240 
00:31:21.240 real    0m0.523s
00:31:21.240 user    0m0.252s
00:31:21.240 sys     0m0.214s
13:26:14 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
13:26:14 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
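The "Invalid IO length parameter" lines above are the point of the sgl test: it builds scatter-gather payloads of varying shape and length, and since the suite passes, the rejected combinations are evidently the ones a given controller configuration cannot express, while the rest complete ("build_io_request_2 test passed" and so on). For orientation, this is roughly how a scatter-gather request is described to the SPDK driver, through a pair of iterator callbacks rather than a flat buffer (spdk_nvme_ns_cmd_writev and the two callback signatures are real SPDK API; struct sgl_ctx and the helper names are invented, and ns/qpair setup is assumed to have happened as in the hello-world flow):

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    struct sgl_ctx {
        void     *segs[2];   /* two DMA-safe buffers */
        uint32_t  seg_len;   /* bytes per segment */
        uint32_t  offset;    /* cursor maintained by the callbacks */
    };

    /* The driver rewinds the iterator to 'offset' before walking the SGEs. */
    static void
    reset_sgl(void *cb_arg, uint32_t offset)
    {
        ((struct sgl_ctx *)cb_arg)->offset = offset;
    }

    /* Hand the driver the next scatter-gather element. Element lengths that
     * do not add up to the request size are the kind of malformed payload
     * that produces "Invalid IO length parameter" above. */
    static int
    next_sge(void *cb_arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *ctx = cb_arg;
        uint32_t idx = ctx->offset / ctx->seg_len;
        uint32_t within = ctx->offset % ctx->seg_len;

        *address = (char *)ctx->segs[idx] + within;
        *length = ctx->seg_len - within;
        ctx->offset += *length;
        return 0;
    }

    /* Submit a two-segment SGL write of lba_count blocks starting at lba.
     * ctx doubles as cb_arg: the driver hands it to reset_sgl/next_sge
     * while building the request, and to cb_fn on completion. */
    static int
    submit_sgl_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                     struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count,
                     spdk_nvme_cmd_cb cb_fn)
    {
        return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count,
                                       cb_fn, ctx, 0 /* io_flags */,
                                       reset_sgl, next_sge);
    }

The callback design lets the driver translate one logical request into PRP lists or NVMe SGL descriptors as the controller allows, which is exactly the degree of freedom the test probes.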
00:31:21.240 
13:26:14 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
13:26:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
13:26:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
13:26:14 nvme -- common/autotest_common.sh@10 -- # set +x
00:31:21.240 ************************************
00:31:21.240 START TEST nvme_e2edp
00:31:21.240 ************************************
13:26:14 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:31:21.499 NVMe Write/Read with End-to-End data protection test
00:31:21.499 Attached to 0000:00:10.0
00:31:21.499 Attached to 0000:00:11.0
00:31:21.499 Attached to 0000:00:13.0
00:31:21.499 Attached to 0000:00:12.0
00:31:21.499 Cleaning up...
00:31:21.499 
00:31:21.499 real    0m0.410s
00:31:21.499 user    0m0.154s
00:31:21.499 sys     0m0.205s
00:31:21.499 ************************************
00:31:21.499 END TEST nvme_e2edp
00:31:21.499 ************************************
13:26:14 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
13:26:14 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:31:21.757 
13:26:14 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
13:26:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
13:26:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
13:26:14 nvme -- common/autotest_common.sh@10 -- # set +x
00:31:21.757 ************************************
00:31:21.757 START TEST nvme_reserve
00:31:21.757 ************************************
13:26:14 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:31:22.016 =====================================================
00:31:22.016 NVMe Controller at PCI bus 0, device 16, function 0
00:31:22.016 =====================================================
00:31:22.016 Reservations: Not Supported
00:31:22.016 =====================================================
00:31:22.016 NVMe Controller at PCI bus 0, device 17, function 0
00:31:22.016 =====================================================
00:31:22.016 Reservations: Not Supported
00:31:22.016 =====================================================
00:31:22.016 NVMe Controller at PCI bus 0, device 19, function 0
00:31:22.016 =====================================================
00:31:22.016 Reservations: Not Supported
00:31:22.016 =====================================================
00:31:22.016 NVMe Controller at PCI bus 0, device 18, function 0
00:31:22.016 =====================================================
00:31:22.016 Reservations: Not Supported
00:31:22.016 Reservation test passed
00:31:22.016 
00:31:22.016 real    0m0.425s
00:31:22.016 user    0m0.172s
00:31:22.016 sys     0m0.202s
13:26:15 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:22.016 ************************************
00:31:22.016 END TEST nvme_reserve
00:31:22.016 ************************************
13:26:15 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
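The "Reservations: Not Supported" lines come straight from each controller's identify data: NVMe reservations are an optional feature advertised through the ONCS field, these QEMU controllers do not advertise it, and the reserve test evidently passes by skipping the register/acquire sequence. A minimal sketch of that capability check (spdk_nvme_ctrlr_get_data and the oncs bitfield are real SPDK API; the wrapper function is invented):

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Report whether an attached controller advertises reservation support
     * in its identify data (ONCS), mirroring the log output above. */
    static bool
    ctrlr_supports_reservations(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

        printf("Reservations: %s\n",
               cdata->oncs.reservations ? "Supported" : "Not Supported");
        return cdata->oncs.reservations != 0;
    }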
13:26:15 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
13:26:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
13:26:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
13:26:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:31:22.275 ************************************
00:31:22.275 START TEST nvme_err_injection
00:31:22.275 ************************************
13:26:15 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:31:22.533 NVMe Error Injection test
00:31:22.533 Attached to 0000:00:10.0
00:31:22.533 Attached to 0000:00:11.0
00:31:22.533 Attached to 0000:00:13.0
00:31:22.533 Attached to 0000:00:12.0
00:31:22.533 0000:00:11.0: get features failed as expected
00:31:22.533 0000:00:13.0: get features failed as expected
00:31:22.533 0000:00:12.0: get features failed as expected
00:31:22.533 0000:00:10.0: get features failed as expected
00:31:22.533 0000:00:10.0: get features successfully as expected
00:31:22.533 0000:00:11.0: get features successfully as expected
00:31:22.533 0000:00:13.0: get features successfully as expected
00:31:22.533 0000:00:12.0: get features successfully as expected
00:31:22.533 0000:00:10.0: read failed as expected
00:31:22.533 0000:00:11.0: read failed as expected
00:31:22.533 0000:00:13.0: read failed as expected
00:31:22.533 0000:00:12.0: read failed as expected
00:31:22.533 0000:00:10.0: read successfully as expected
00:31:22.533 0000:00:11.0: read successfully as expected
00:31:22.533 0000:00:13.0: read successfully as expected
00:31:22.533 0000:00:12.0: read successfully as expected
00:31:22.533 Cleaning up...
00:31:22.534 ************************************
00:31:22.534 END TEST nvme_err_injection
00:31:22.534 ************************************
00:31:22.534 
00:31:22.534 real    0m0.405s
00:31:22.534 user    0m0.149s
00:31:22.534 sys     0m0.209s
13:26:15 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
13:26:15 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
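The paired "failed as expected" / "successfully as expected" lines are the error-injection pattern: arm a synthetic failure for one opcode, watch the command fail, disarm it, and watch the retry succeed. SPDK exposes this through spdk_nvme_qpair_add_cmd_error_injection / spdk_nvme_qpair_remove_cmd_error_injection (real API; the log does not show which flag and status values the test passes, so the ones below are illustrative):

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Arm one injected failure for the next Get Features admin command, so
     * a subsequent get-features call completes with an error -- the
     * "get features failed as expected" lines above. A NULL qpair targets
     * the admin queue. */
    static int
    arm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
    {
        return spdk_nvme_qpair_add_cmd_error_injection(
                ctrlr, NULL, SPDK_NVME_OPC_GET_FEATURES,
                false /* still submit the command */, 0 /* no timeout */,
                1 /* inject once */, SPDK_NVME_SCT_GENERIC,
                SPDK_NVME_SC_INVALID_FIELD);
    }

    /* Disarm it again; a retried command then completes normally, matching
     * "get features successfully as expected". */
    static void
    disarm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
                                                   SPDK_NVME_OPC_GET_FEATURES);
    }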
00:31:22.534 
13:26:15 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
13:26:15 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
13:26:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
13:26:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:31:22.534 ************************************
00:31:22.534 START TEST nvme_overhead
00:31:22.534 ************************************
13:26:15 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:31:23.907 Initializing NVMe Controllers
00:31:23.908 Attached to 0000:00:10.0
00:31:23.908 Attached to 0000:00:11.0
00:31:23.908 Attached to 0000:00:13.0
00:31:23.908 Attached to 0000:00:12.0
00:31:23.908 Initialization complete. Launching workers.
00:31:23.908 submit (in ns)   avg, min, max = 18459.2, 12925.7, 104721.0
00:31:23.908 complete (in ns) avg, min, max = 12287.4, 7771.4, 130699.0
00:31:23.908 
00:31:23.908 Submit histogram
00:31:23.908 ================
00:31:23.908        Range in us     Cumulative     Count
00:31:23.909 [histogram rows omitted: buckets from 12.922-12.983 us (0.0470%, 4 ops) to 104.350-104.838 us (100.0000%, 1 op)]
00:31:23.909 
00:31:23.909 Complete histogram
00:31:23.909 ==================
00:31:23.909        Range in us     Cumulative     Count
00:31:23.911 [histogram rows omitted: buckets from 7.771-7.802 us (0.0117%, 1 op) to 130.682-131.657 us (100.0000%, 1 op)]
00:31:23.911 
00:31:23.911 ************************************
00:31:23.911 END TEST nvme_overhead
00:31:23.911 ************************************
00:31:23.911 
00:31:23.911 real    0m1.380s
00:31:23.911 user    0m1.137s
00:31:23.911 sys     0m0.195s
nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:23.911 13:26:16 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:31:24.169 13:26:17 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:31:24.169 13:26:17 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:24.169 13:26:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:24.169 13:26:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:24.169 ************************************ 00:31:24.169 START TEST nvme_arbitration 00:31:24.169 ************************************ 00:31:24.169 13:26:17 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:31:28.375 Initializing NVMe Controllers 00:31:28.375 Attached to 0000:00:10.0 00:31:28.375 Attached to 0000:00:11.0 00:31:28.375 Attached to 0000:00:13.0 00:31:28.375 Attached to 0000:00:12.0 00:31:28.375 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:31:28.375 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:31:28.375 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:31:28.375 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:31:28.375 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:31:28.375 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:31:28.375 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:31:28.375 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:31:28.375 Initialization complete. Launching workers. 00:31:28.375 Starting thread on core 1 with urgent priority queue 00:31:28.375 Starting thread on core 2 with urgent priority queue 00:31:28.375 Starting thread on core 3 with urgent priority queue 00:31:28.375 Starting thread on core 0 with urgent priority queue 00:31:28.375 QEMU NVMe Ctrl (12340 ) core 0: 512.00 IO/s 195.31 secs/100000 ios 00:31:28.375 QEMU NVMe Ctrl (12342 ) core 0: 512.00 IO/s 195.31 secs/100000 ios 00:31:28.375 QEMU NVMe Ctrl (12341 ) core 1: 469.33 IO/s 213.07 secs/100000 ios 00:31:28.375 QEMU NVMe Ctrl (12342 ) core 1: 469.33 IO/s 213.07 secs/100000 ios 00:31:28.375 QEMU NVMe Ctrl (12343 ) core 2: 469.33 IO/s 213.07 secs/100000 ios 00:31:28.375 QEMU NVMe Ctrl (12342 ) core 3: 426.67 IO/s 234.38 secs/100000 ios 00:31:28.375 ======================================================== 00:31:28.375 00:31:28.375 00:31:28.375 real 0m3.555s 00:31:28.375 user 0m9.436s 00:31:28.375 sys 0m0.242s 00:31:28.375 13:26:20 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:28.375 13:26:20 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:31:28.375 ************************************ 00:31:28.375 END TEST nvme_arbitration 00:31:28.375 ************************************ 00:31:28.375 13:26:20 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:31:28.375 13:26:20 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:28.375 13:26:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:28.375 13:26:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:28.375 ************************************ 00:31:28.375 START TEST nvme_single_aen 00:31:28.375 ************************************ 00:31:28.375 13:26:20 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:31:28.375 Asynchronous Event Request test 00:31:28.375 Attached to 0000:00:10.0 00:31:28.375 Attached to 0000:00:11.0 00:31:28.375 Attached to 0000:00:13.0 00:31:28.375 Attached to 0000:00:12.0 00:31:28.375 Reset controller to setup AER completions for this process 00:31:28.375 Registering asynchronous event callbacks... 00:31:28.375 Getting orig temperature thresholds of all controllers 00:31:28.375 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:28.375 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:28.375 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:28.375 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:28.375 Setting all controllers temperature threshold low to trigger AER 00:31:28.375 Waiting for all controllers temperature threshold to be set lower 00:31:28.375 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:28.375 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:31:28.375 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:28.375 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:31:28.375 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:28.375 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:31:28.375 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:28.375 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:31:28.375 Waiting for all controllers to trigger AER and reset threshold 00:31:28.375 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:28.375 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:28.375 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:28.375 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:28.375 Cleaning up... 
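The aer tool above triggers the event deliberately: it drops the Temperature Threshold feature below each controller's current composite temperature, which makes the controller raise an asynchronous event, and the callback then restores the threshold. A rough by-hand equivalent with nvme-cli might look like this sketch (illustrative only; the device node and threshold values are assumptions, not taken from this job):

    # read the current composite temperature from the SMART / Health log (log page 0x02)
    nvme smart-log /dev/nvme0 | grep -i temperature
    # drop the Temperature Threshold feature (FID 0x04) below the ~323 K reading
    nvme set-feature /dev/nvme0 -f 4 -v 300
    # the controller raises an AER; restore the 343 K default afterwards
    nvme set-feature /dev/nvme0 -f 4 -v 343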
00:31:28.375 ************************************ 00:31:28.375 END TEST nvme_single_aen 00:31:28.375 ************************************ 00:31:28.375 00:31:28.375 real 0m0.397s 00:31:28.375 user 0m0.151s 00:31:28.375 sys 0m0.198s 00:31:28.375 13:26:21 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:28.375 13:26:21 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:31:28.375 13:26:21 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:31:28.375 13:26:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:28.375 13:26:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:28.375 13:26:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:28.375 ************************************ 00:31:28.375 START TEST nvme_doorbell_aers 00:31:28.375 ************************************ 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:31:28.375 13:26:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:31:28.633 [2024-12-06 13:26:21.554037] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:31:38.602 Executing: test_write_invalid_db 00:31:38.602 Waiting for AER completion... 00:31:38.602 Failure: test_write_invalid_db 00:31:38.602 00:31:38.602 Executing: test_invalid_db_write_overflow_sq 00:31:38.602 Waiting for AER completion... 00:31:38.602 Failure: test_invalid_db_write_overflow_sq 00:31:38.602 00:31:38.602 Executing: test_invalid_db_write_overflow_cq 00:31:38.602 Waiting for AER completion... 
00:31:38.602 Failure: test_invalid_db_write_overflow_cq 00:31:38.602 00:31:38.602 13:26:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:31:38.602 13:26:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:31:38.602 [2024-12-06 13:26:31.629424] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:31:48.572 Executing: test_write_invalid_db 00:31:48.572 Waiting for AER completion... 00:31:48.572 Failure: test_write_invalid_db 00:31:48.572 00:31:48.572 Executing: test_invalid_db_write_overflow_sq 00:31:48.572 Waiting for AER completion... 00:31:48.572 Failure: test_invalid_db_write_overflow_sq 00:31:48.572 00:31:48.572 Executing: test_invalid_db_write_overflow_cq 00:31:48.572 Waiting for AER completion... 00:31:48.572 Failure: test_invalid_db_write_overflow_cq 00:31:48.572 00:31:48.572 13:26:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:31:48.572 13:26:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:31:48.572 [2024-12-06 13:26:41.611457] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:31:58.598 Executing: test_write_invalid_db 00:31:58.598 Waiting for AER completion... 00:31:58.598 Failure: test_write_invalid_db 00:31:58.598 00:31:58.598 Executing: test_invalid_db_write_overflow_sq 00:31:58.598 Waiting for AER completion... 00:31:58.598 Failure: test_invalid_db_write_overflow_sq 00:31:58.598 00:31:58.598 Executing: test_invalid_db_write_overflow_cq 00:31:58.598 Waiting for AER completion... 00:31:58.598 Failure: test_invalid_db_write_overflow_cq 00:31:58.598 00:31:58.598 13:26:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:31:58.598 13:26:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:31:58.856 [2024-12-06 13:26:51.745037] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:32:08.829 Executing: test_write_invalid_db 00:32:08.829 Waiting for AER completion... 00:32:08.829 Failure: test_write_invalid_db 00:32:08.829 00:32:08.829 Executing: test_invalid_db_write_overflow_sq 00:32:08.829 Waiting for AER completion... 00:32:08.829 Failure: test_invalid_db_write_overflow_sq 00:32:08.829 00:32:08.829 Executing: test_invalid_db_write_overflow_cq 00:32:08.829 Waiting for AER completion... 
00:32:08.829 Failure: test_invalid_db_write_overflow_cq 00:32:08.829 00:32:08.829 00:32:08.829 real 0m40.337s 00:32:08.829 user 0m28.533s 00:32:08.829 sys 0m11.360s 00:32:08.829 13:27:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:08.829 13:27:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:32:08.829 ************************************ 00:32:08.829 END TEST nvme_doorbell_aers 00:32:08.829 ************************************ 00:32:08.829 13:27:01 nvme -- nvme/nvme.sh@97 -- # uname 00:32:08.829 13:27:01 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:32:08.829 13:27:01 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:32:08.829 13:27:01 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:32:08.829 13:27:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:08.829 13:27:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:08.829 ************************************ 00:32:08.829 START TEST nvme_multi_aen 00:32:08.829 ************************************ 00:32:08.829 13:27:01 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:32:08.829 [2024-12-06 13:27:01.864008] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:32:08.829 [2024-12-06 13:27:01.864131] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:32:08.829 [2024-12-06 13:27:01.864159] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:32:08.829 [2024-12-06 13:27:01.866248] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:32:08.829 [2024-12-06 13:27:01.866524] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:32:08.829 [2024-12-06 13:27:01.866548] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:32:08.829 [2024-12-06 13:27:01.868136] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:32:08.829 [2024-12-06 13:27:01.868184] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:32:08.829 [2024-12-06 13:27:01.868201] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:32:08.829 [2024-12-06 13:27:01.869762] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:32:08.829 [2024-12-06 13:27:01.869966] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 00:32:08.829 [2024-12-06 13:27:01.869989] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65310) is not found. Dropping the request. 
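The multi-AEN variant starting here exercises the same temperature path from a parent and a forked child at once (the child PID is printed just below), with -i 0 giving both processes a common shared-memory id. The same multi-process pairing can be tried in miniature with the perf tool this job runs later; the two commands below are copied from this log and are meant for two shells (a sketch, not part of the job's own flow):

    # shell 1: first process claims shm id 0 on core mask 0x1
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
    # shell 2: second process joins shm id 0 on core mask 0x2
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2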
00:32:08.829 Child process pid: 65827 00:32:09.087 [Child] Asynchronous Event Request test 00:32:09.087 [Child] Attached to 0000:00:10.0 00:32:09.087 [Child] Attached to 0000:00:11.0 00:32:09.087 [Child] Attached to 0000:00:13.0 00:32:09.088 [Child] Attached to 0000:00:12.0 00:32:09.088 [Child] Registering asynchronous event callbacks... 00:32:09.088 [Child] Getting orig temperature thresholds of all controllers 00:32:09.088 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:09.088 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:09.088 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:09.088 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:09.088 [Child] Waiting for all controllers to trigger AER and reset threshold 00:32:09.088 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:09.088 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:09.088 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:09.088 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:09.088 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:09.088 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:09.088 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:09.088 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:09.088 [Child] Cleaning up... 00:32:09.346 Asynchronous Event Request test 00:32:09.346 Attached to 0000:00:10.0 00:32:09.346 Attached to 0000:00:11.0 00:32:09.346 Attached to 0000:00:13.0 00:32:09.346 Attached to 0000:00:12.0 00:32:09.346 Reset controller to setup AER completions for this process 00:32:09.346 Registering asynchronous event callbacks... 
00:32:09.346 Getting orig temperature thresholds of all controllers 00:32:09.346 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:09.346 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:09.346 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:09.346 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:09.346 Setting all controllers temperature threshold low to trigger AER 00:32:09.346 Waiting for all controllers temperature threshold to be set lower 00:32:09.346 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:09.346 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:32:09.346 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:09.346 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:32:09.346 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:09.346 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:32:09.346 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:09.346 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:32:09.346 Waiting for all controllers to trigger AER and reset threshold 00:32:09.346 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:09.346 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:09.346 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:09.346 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:09.346 Cleaning up... 00:32:09.346 00:32:09.346 real 0m0.741s 00:32:09.346 user 0m0.240s 00:32:09.346 sys 0m0.388s 00:32:09.346 13:27:02 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.346 13:27:02 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:32:09.346 ************************************ 00:32:09.346 END TEST nvme_multi_aen 00:32:09.346 ************************************ 00:32:09.346 13:27:02 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:32:09.346 13:27:02 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:09.346 13:27:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.346 13:27:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:09.346 ************************************ 00:32:09.346 START TEST nvme_startup 00:32:09.346 ************************************ 00:32:09.346 13:27:02 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:32:09.604 Initializing NVMe Controllers 00:32:09.604 Attached to 0000:00:10.0 00:32:09.604 Attached to 0000:00:11.0 00:32:09.604 Attached to 0000:00:13.0 00:32:09.604 Attached to 0000:00:12.0 00:32:09.604 Initialization complete. 00:32:09.604 Time used:228245.875 (us). 
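The startup test above simply times controller attach: going by the output, the -t 1000000 argument reads as a budget of 1,000,000 us, which the reported 228245.875 us passes comfortably (that reading of -t is inferred from this output, not documented here). Re-running it standalone is a one-liner:

    # illustrative re-run: time controller initialization against a ~1 s budget
    /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000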
00:32:09.604 ************************************ 00:32:09.604 END TEST nvme_startup 00:32:09.604 ************************************ 00:32:09.604 00:32:09.604 real 0m0.332s 00:32:09.604 user 0m0.101s 00:32:09.604 sys 0m0.164s 00:32:09.604 13:27:02 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.604 13:27:02 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:32:09.604 13:27:02 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:32:09.604 13:27:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:09.604 13:27:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.604 13:27:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:09.862 ************************************ 00:32:09.862 START TEST nvme_multi_secondary 00:32:09.862 ************************************ 00:32:09.862 13:27:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:32:09.862 13:27:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65883 00:32:09.862 13:27:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:32:09.862 13:27:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65884 00:32:09.863 13:27:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:32:09.863 13:27:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:32:13.146 Initializing NVMe Controllers 00:32:13.146 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:13.146 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:32:13.146 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:32:13.146 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:32:13.146 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:32:13.146 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:32:13.146 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:32:13.146 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:32:13.146 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:32:13.146 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:32:13.146 Initialization complete. Launching workers. 
00:32:13.146 ======================================================== 00:32:13.146 Latency(us) 00:32:13.146 Device Information : IOPS MiB/s Average min max 00:32:13.146 PCIE (0000:00:10.0) NSID 1 from core 1: 5660.93 22.11 2824.58 1167.50 11837.76 00:32:13.146 PCIE (0000:00:11.0) NSID 1 from core 1: 5660.93 22.11 2826.11 1154.30 12174.83 00:32:13.146 PCIE (0000:00:13.0) NSID 1 from core 1: 5660.93 22.11 2826.11 1121.34 12363.65 00:32:13.146 PCIE (0000:00:12.0) NSID 1 from core 1: 5660.93 22.11 2826.08 1141.02 12608.27 00:32:13.146 PCIE (0000:00:12.0) NSID 2 from core 1: 5660.93 22.11 2826.18 1130.11 13190.68 00:32:13.146 PCIE (0000:00:12.0) NSID 3 from core 1: 5660.93 22.11 2826.20 1201.96 13297.62 00:32:13.146 ======================================================== 00:32:13.146 Total : 33965.59 132.68 2825.88 1121.34 13297.62 00:32:13.146 00:32:13.146 Initializing NVMe Controllers 00:32:13.146 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:13.146 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:32:13.146 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:32:13.147 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:32:13.147 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:32:13.147 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:32:13.147 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:32:13.147 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:32:13.147 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:32:13.147 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:32:13.147 Initialization complete. Launching workers. 00:32:13.147 ======================================================== 00:32:13.147 Latency(us) 00:32:13.147 Device Information : IOPS MiB/s Average min max 00:32:13.147 PCIE (0000:00:10.0) NSID 1 from core 2: 2404.21 9.39 6646.95 1493.75 13656.81 00:32:13.147 PCIE (0000:00:11.0) NSID 1 from core 2: 2404.21 9.39 6645.32 1467.45 14066.99 00:32:13.147 PCIE (0000:00:13.0) NSID 1 from core 2: 2404.21 9.39 6645.41 1461.73 13489.88 00:32:13.147 PCIE (0000:00:12.0) NSID 1 from core 2: 2404.21 9.39 6645.22 1406.62 13769.04 00:32:13.147 PCIE (0000:00:12.0) NSID 2 from core 2: 2404.21 9.39 6645.00 1422.02 14157.90 00:32:13.147 PCIE (0000:00:12.0) NSID 3 from core 2: 2404.21 9.39 6644.86 1377.55 13820.95 00:32:13.147 ======================================================== 00:32:13.147 Total : 14425.27 56.35 6645.46 1377.55 14157.90 00:32:13.147 00:32:13.404 13:27:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65883 00:32:15.421 Initializing NVMe Controllers 00:32:15.421 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:15.421 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:32:15.421 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:32:15.421 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:32:15.421 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:32:15.421 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:32:15.421 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:32:15.421 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:32:15.421 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:32:15.421 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:32:15.421 Initialization complete. Launching workers. 
00:32:15.421 ======================================================== 00:32:15.421 Latency(us) 00:32:15.421 Device Information : IOPS MiB/s Average min max 00:32:15.421 PCIE (0000:00:10.0) NSID 1 from core 0: 7633.82 29.82 2094.19 1009.62 9712.23 00:32:15.421 PCIE (0000:00:11.0) NSID 1 from core 0: 7633.82 29.82 2095.40 1053.50 9690.33 00:32:15.421 PCIE (0000:00:13.0) NSID 1 from core 0: 7633.82 29.82 2095.35 1055.31 9715.89 00:32:15.421 PCIE (0000:00:12.0) NSID 1 from core 0: 7633.82 29.82 2095.28 1052.10 9297.08 00:32:15.421 PCIE (0000:00:12.0) NSID 2 from core 0: 7633.82 29.82 2095.23 1033.60 9596.07 00:32:15.421 PCIE (0000:00:12.0) NSID 3 from core 0: 7633.82 29.82 2095.17 1036.55 10703.35 00:32:15.421 ======================================================== 00:32:15.421 Total : 45802.92 178.92 2095.10 1009.62 10703.35 00:32:15.421 00:32:15.421 13:27:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65884 00:32:15.421 13:27:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65962 00:32:15.421 13:27:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:32:15.421 13:27:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65963 00:32:15.421 13:27:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:32:15.421 13:27:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:32:19.620 Initializing NVMe Controllers 00:32:19.620 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:19.620 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:32:19.620 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:32:19.620 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:32:19.620 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:32:19.620 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:32:19.620 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:32:19.620 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:32:19.620 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:32:19.620 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:32:19.620 Initialization complete. Launching workers. 
00:32:19.620 ======================================================== 00:32:19.620 Latency(us) 00:32:19.620 Device Information : IOPS MiB/s Average min max 00:32:19.620 PCIE (0000:00:10.0) NSID 1 from core 1: 4856.58 18.97 3292.63 1236.17 13910.45 00:32:19.620 PCIE (0000:00:11.0) NSID 1 from core 1: 4856.58 18.97 3293.95 1262.92 14349.16 00:32:19.620 PCIE (0000:00:13.0) NSID 1 from core 1: 4856.58 18.97 3294.02 1243.04 14287.50 00:32:19.620 PCIE (0000:00:12.0) NSID 1 from core 1: 4856.58 18.97 3293.96 1255.32 14180.30 00:32:19.620 PCIE (0000:00:12.0) NSID 2 from core 1: 4856.58 18.97 3294.05 1249.29 14091.87 00:32:19.620 PCIE (0000:00:12.0) NSID 3 from core 1: 4856.58 18.97 3293.99 1244.17 14096.45 00:32:19.620 ======================================================== 00:32:19.620 Total : 29139.46 113.83 3293.76 1236.17 14349.16 00:32:19.620 00:32:19.620 Initializing NVMe Controllers 00:32:19.620 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:19.620 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:32:19.620 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:32:19.620 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:32:19.620 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:32:19.620 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:32:19.620 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:32:19.620 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:32:19.620 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:32:19.620 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:32:19.620 Initialization complete. Launching workers. 00:32:19.620 ======================================================== 00:32:19.620 Latency(us) 00:32:19.620 Device Information : IOPS MiB/s Average min max 00:32:19.620 PCIE (0000:00:10.0) NSID 1 from core 0: 5015.79 19.59 3187.98 1174.50 6963.25 00:32:19.620 PCIE (0000:00:11.0) NSID 1 from core 0: 5015.79 19.59 3189.19 1212.30 7517.12 00:32:19.620 PCIE (0000:00:13.0) NSID 1 from core 0: 5015.79 19.59 3189.03 1190.98 7721.91 00:32:19.620 PCIE (0000:00:12.0) NSID 1 from core 0: 5015.79 19.59 3188.81 1190.53 7321.42 00:32:19.620 PCIE (0000:00:12.0) NSID 2 from core 0: 5015.79 19.59 3188.64 1198.58 7016.04 00:32:19.620 PCIE (0000:00:12.0) NSID 3 from core 0: 5015.79 19.59 3188.47 1091.23 6971.94 00:32:19.620 ======================================================== 00:32:19.620 Total : 30094.75 117.56 3188.68 1091.23 7721.91 00:32:19.620 00:32:21.095 Initializing NVMe Controllers 00:32:21.095 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:21.095 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:32:21.095 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:32:21.095 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:32:21.095 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:32:21.095 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:32:21.095 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:32:21.095 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:32:21.095 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:32:21.095 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:32:21.095 Initialization complete. Launching workers. 
00:32:21.095 ======================================================== 00:32:21.095 Latency(us) 00:32:21.095 Device Information : IOPS MiB/s Average min max 00:32:21.095 PCIE (0000:00:10.0) NSID 1 from core 2: 3209.16 12.54 4978.97 1219.80 14857.20 00:32:21.095 PCIE (0000:00:11.0) NSID 1 from core 2: 3209.16 12.54 4981.23 1158.35 14680.32 00:32:21.095 PCIE (0000:00:13.0) NSID 1 from core 2: 3209.16 12.54 4981.18 1162.69 16278.76 00:32:21.095 PCIE (0000:00:12.0) NSID 1 from core 2: 3209.16 12.54 4980.85 1066.58 14244.75 00:32:21.095 PCIE (0000:00:12.0) NSID 2 from core 2: 3209.16 12.54 4981.04 1006.84 16316.44 00:32:21.095 PCIE (0000:00:12.0) NSID 3 from core 2: 3212.36 12.55 4975.75 940.78 15815.65 00:32:21.095 ======================================================== 00:32:21.095 Total : 19258.19 75.23 4979.84 940.78 16316.44 00:32:21.095 00:32:21.095 ************************************ 00:32:21.095 END TEST nvme_multi_secondary 00:32:21.095 ************************************ 00:32:21.095 13:27:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65962 00:32:21.095 13:27:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65963 00:32:21.095 00:32:21.095 real 0m11.062s 00:32:21.095 user 0m18.714s 00:32:21.095 sys 0m1.167s 00:32:21.095 13:27:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.095 13:27:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:32:21.095 13:27:13 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:32:21.095 13:27:13 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:32:21.095 13:27:13 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64861 ]] 00:32:21.095 13:27:13 nvme -- common/autotest_common.sh@1094 -- # kill 64861 00:32:21.095 13:27:13 nvme -- common/autotest_common.sh@1095 -- # wait 64861 00:32:21.095 [2024-12-06 13:27:13.827702] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.828581] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.828633] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.828655] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.831513] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.831560] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.831577] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.831595] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.834608] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 
00:32:21.095 [2024-12-06 13:27:13.834657] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.834678] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.834700] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.837994] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.838046] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.838065] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 [2024-12-06 13:27:13.838087] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65826) is not found. Dropping the request. 00:32:21.095 13:27:14 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:32:21.095 13:27:14 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:32:21.095 13:27:14 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:32:21.095 13:27:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:21.095 13:27:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:21.095 13:27:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:21.095 ************************************ 00:32:21.095 START TEST bdev_nvme_reset_stuck_adm_cmd 00:32:21.095 ************************************ 00:32:21.095 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:32:21.095 * Looking for test storage... 
00:32:21.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:21.095 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:21.095 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:32:21.095 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:21.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.356 --rc genhtml_branch_coverage=1 00:32:21.356 --rc genhtml_function_coverage=1 00:32:21.356 --rc genhtml_legend=1 00:32:21.356 --rc geninfo_all_blocks=1 00:32:21.356 --rc geninfo_unexecuted_blocks=1 00:32:21.356 00:32:21.356 ' 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:21.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.356 --rc genhtml_branch_coverage=1 00:32:21.356 --rc genhtml_function_coverage=1 00:32:21.356 --rc genhtml_legend=1 00:32:21.356 --rc geninfo_all_blocks=1 00:32:21.356 --rc geninfo_unexecuted_blocks=1 00:32:21.356 00:32:21.356 ' 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:21.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.356 --rc genhtml_branch_coverage=1 00:32:21.356 --rc genhtml_function_coverage=1 00:32:21.356 --rc genhtml_legend=1 00:32:21.356 --rc geninfo_all_blocks=1 00:32:21.356 --rc geninfo_unexecuted_blocks=1 00:32:21.356 00:32:21.356 ' 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:21.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.356 --rc genhtml_branch_coverage=1 00:32:21.356 --rc genhtml_function_coverage=1 00:32:21.356 --rc genhtml_legend=1 00:32:21.356 --rc geninfo_all_blocks=1 00:32:21.356 --rc geninfo_unexecuted_blocks=1 00:32:21.356 00:32:21.356 ' 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:32:21.356 
13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66119 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66119 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 66119 ']' 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
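Once the target comes up on /var/tmp/spdk.sock (the wait just above), the rest of the stuck-admin-command test is driven over RPC. Condensed from the xtrace that follows, the sequence is (arguments copied from this job; opcode 10 is the admin Get Features opcode 0x0a, and sct=0 / sc=1 is the generic Invalid Opcode status visible in the completion later):

    # attach the first controller as bdev controller "nvme0"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # arm a one-shot injection: hold the next admin Get Features (do_not_submit)
    # with a 15 s timeout and an armed completion status of sct=0 / sc=1
    scripts/rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # issue the Get Features (Number of Queues) command that will get stuck
    scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64-encoded command>
    # a controller reset then completes the stuck command manually
    scripts/rpc.py bdev_nvme_reset_controller nvme0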
00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.356 13:27:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:32:21.357 [2024-12-06 13:27:14.447928] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:32:21.357 [2024-12-06 13:27:14.448075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66119 ] 00:32:21.616 [2024-12-06 13:27:14.662596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:21.875 [2024-12-06 13:27:14.824722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.875 [2024-12-06 13:27:14.824864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:21.875 [2024-12-06 13:27:14.825023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.875 [2024-12-06 13:27:14.825056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:23.250 13:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:23.250 13:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:32:23.250 13:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:32:23.250 13:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.250 13:27:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:32:23.250 nvme0n1 00:32:23.250 13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.250 13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:32:23.250 13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_JNwyf.txt 00:32:23.250 13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:32:23.250 13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.250 13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:32:23.250 true 00:32:23.250 13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.250 13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:32:23.250 13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733491636 00:32:23.250 13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66154 00:32:23.251 13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:23.251 13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:32:23.251 
13:27:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:32:25.148 [2024-12-06 13:27:18.069163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:32:25.148 [2024-12-06 13:27:18.069651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:25.148 [2024-12-06 13:27:18.069696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:32:25.148 [2024-12-06 13:27:18.069718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.148 [2024-12-06 13:27:18.071664] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66154 00:32:25.148 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66154 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66154 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_JNwyf.txt 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_JNwyf.txt 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66119 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 66119 ']' 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 66119 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66119 00:32:25.148 killing process with pid 66119 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66119' 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 66119 00:32:25.148 13:27:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 66119 00:32:28.429 13:27:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:32:28.429 13:27:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:32:28.429 ************************************ 00:32:28.429 END TEST bdev_nvme_reset_stuck_adm_cmd 00:32:28.429 ************************************ 00:32:28.430 00:32:28.430 real 0m7.228s 
00:32:28.430 user 0m25.608s 00:32:28.430 sys 0m0.942s 00:32:28.430 13:27:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.430 13:27:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:32:28.430 13:27:21 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:32:28.430 13:27:21 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:32:28.430 13:27:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:28.430 13:27:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:28.430 13:27:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:28.430 ************************************ 00:32:28.430 START TEST nvme_fio 00:32:28.430 ************************************ 00:32:28.430 13:27:21 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:32:28.430 13:27:21 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:32:28.430 13:27:21 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:32:28.430 13:27:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:32:28.430 13:27:21 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:28.430 13:27:21 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:32:28.430 13:27:21 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:28.430 13:27:21 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:28.430 13:27:21 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:28.430 13:27:21 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:32:28.430 13:27:21 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:28.430 13:27:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:32:28.430 13:27:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:32:28.430 13:27:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:32:28.430 13:27:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:32:28.430 13:27:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:32:28.689 13:27:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:32:28.689 13:27:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:32:29.258 13:27:22 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:32:29.258 13:27:22 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:29.258 13:27:22 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:32:29.258 13:27:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:32:29.516 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:29.516 fio-3.35 00:32:29.516 Starting 1 thread 00:32:32.820 00:32:32.820 test: (groupid=0, jobs=1): err= 0: pid=66314: Fri Dec 6 13:27:25 2024 00:32:32.820 read: IOPS=17.8k, BW=69.6MiB/s (73.0MB/s)(139MiB/2001msec) 00:32:32.820 slat (nsec): min=4790, max=75004, avg=6335.87, stdev=1498.16 00:32:32.820 clat (usec): min=390, max=9379, avg=3569.81, stdev=465.02 00:32:32.820 lat (usec): min=396, max=9454, avg=3576.14, stdev=465.65 00:32:32.820 clat percentiles (usec): 00:32:32.820 | 1.00th=[ 2868], 5.00th=[ 3163], 10.00th=[ 3228], 20.00th=[ 3261], 00:32:32.820 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3425], 60.00th=[ 3490], 00:32:32.820 | 70.00th=[ 3621], 80.00th=[ 4015], 90.00th=[ 4228], 95.00th=[ 4359], 00:32:32.820 | 99.00th=[ 4555], 99.50th=[ 5538], 99.90th=[ 7046], 99.95th=[ 7635], 00:32:32.820 | 99.99th=[ 9241] 00:32:32.820 bw ( KiB/s): min=65032, max=74784, per=98.69%, avg=70360.00, stdev=4938.45, samples=3 00:32:32.820 iops : min=16258, max=18696, avg=17590.00, stdev=1234.61, samples=3 00:32:32.820 write: IOPS=17.8k, BW=69.6MiB/s (73.0MB/s)(139MiB/2001msec); 0 zone resets 00:32:32.820 slat (nsec): min=5123, max=53015, avg=6474.56, stdev=1486.75 00:32:32.820 clat (usec): min=400, max=9244, avg=3579.68, stdev=470.38 00:32:32.820 lat (usec): min=405, max=9258, avg=3586.16, stdev=470.99 00:32:32.820 clat percentiles (usec): 00:32:32.820 | 1.00th=[ 2900], 5.00th=[ 3163], 10.00th=[ 3228], 20.00th=[ 3294], 00:32:32.820 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3425], 60.00th=[ 3490], 00:32:32.820 | 70.00th=[ 3654], 80.00th=[ 4015], 90.00th=[ 4228], 95.00th=[ 4359], 00:32:32.820 | 99.00th=[ 4621], 99.50th=[ 5735], 99.90th=[ 7111], 99.95th=[ 7832], 00:32:32.820 | 99.99th=[ 8979] 00:32:32.820 bw ( KiB/s): min=64968, max=74768, per=98.69%, avg=70352.00, stdev=4971.19, samples=3 00:32:32.820 iops : min=16242, max=18692, avg=17588.00, stdev=1242.80, samples=3 00:32:32.820 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:32.820 lat (msec) : 2=0.26%, 4=79.32%, 10=20.39% 00:32:32.820 cpu : usr=99.20%, sys=0.15%, ctx=5, majf=0, minf=608 
00:32:32.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:32.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:32.820 issued rwts: total=35663,35659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:32.820 00:32:32.820 Run status group 0 (all jobs): 00:32:32.820 READ: bw=69.6MiB/s (73.0MB/s), 69.6MiB/s-69.6MiB/s (73.0MB/s-73.0MB/s), io=139MiB (146MB), run=2001-2001msec 00:32:32.820 WRITE: bw=69.6MiB/s (73.0MB/s), 69.6MiB/s-69.6MiB/s (73.0MB/s-73.0MB/s), io=139MiB (146MB), run=2001-2001msec 00:32:33.078 ----------------------------------------------------- 00:32:33.078 Suppressions used: 00:32:33.078 count bytes template 00:32:33.078 1 32 /usr/src/fio/parse.c 00:32:33.078 1 8 libtcmalloc_minimal.so 00:32:33.078 ----------------------------------------------------- 00:32:33.078 00:32:33.078 13:27:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:33.078 13:27:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:32:33.078 13:27:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:32:33.078 13:27:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:32:33.336 13:27:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:32:33.336 13:27:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:32:33.594 13:27:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:32:33.594 13:27:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:32:33.594 13:27:26 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:32:33.594 13:27:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:32:33.852 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:33.852 fio-3.35 00:32:33.852 Starting 1 thread 00:32:38.042 00:32:38.042 test: (groupid=0, jobs=1): err= 0: pid=66387: Fri Dec 6 13:27:30 2024 00:32:38.042 read: IOPS=18.4k, BW=71.8MiB/s (75.3MB/s)(144MiB/2001msec) 00:32:38.042 slat (usec): min=5, max=101, avg= 6.21, stdev= 1.51 00:32:38.042 clat (usec): min=319, max=8106, avg=3462.95, stdev=427.76 00:32:38.042 lat (usec): min=325, max=8207, avg=3469.16, stdev=428.26 00:32:38.042 clat percentiles (usec): 00:32:38.042 | 1.00th=[ 2900], 5.00th=[ 3130], 10.00th=[ 3163], 20.00th=[ 3228], 00:32:38.042 | 30.00th=[ 3261], 40.00th=[ 3294], 50.00th=[ 3326], 60.00th=[ 3392], 00:32:38.042 | 70.00th=[ 3425], 80.00th=[ 3556], 90.00th=[ 4113], 95.00th=[ 4424], 00:32:38.042 | 99.00th=[ 4948], 99.50th=[ 5211], 99.90th=[ 5997], 99.95th=[ 6587], 00:32:38.042 | 99.99th=[ 8029] 00:32:38.042 bw ( KiB/s): min=70664, max=77480, per=99.91%, avg=73496.00, stdev=3551.03, samples=3 00:32:38.042 iops : min=17666, max=19370, avg=18374.00, stdev=887.76, samples=3 00:32:38.042 write: IOPS=18.4k, BW=71.8MiB/s (75.3MB/s)(144MiB/2001msec); 0 zone resets 00:32:38.042 slat (nsec): min=5099, max=52294, avg=6371.25, stdev=1431.11 00:32:38.042 clat (usec): min=211, max=8027, avg=3463.89, stdev=429.33 00:32:38.042 lat (usec): min=217, max=8042, avg=3470.26, stdev=429.81 00:32:38.042 clat percentiles (usec): 00:32:38.042 | 1.00th=[ 2868], 5.00th=[ 3130], 10.00th=[ 3163], 20.00th=[ 3228], 00:32:38.042 | 30.00th=[ 3261], 40.00th=[ 3294], 50.00th=[ 3326], 60.00th=[ 3392], 00:32:38.042 | 70.00th=[ 3425], 80.00th=[ 3556], 90.00th=[ 4113], 95.00th=[ 4424], 00:32:38.042 | 99.00th=[ 4948], 99.50th=[ 5211], 99.90th=[ 6194], 99.95th=[ 6718], 00:32:38.042 | 99.99th=[ 7898] 00:32:38.042 bw ( KiB/s): min=70656, max=76984, per=99.78%, avg=73405.33, stdev=3244.49, samples=3 00:32:38.042 iops : min=17664, max=19246, avg=18351.33, stdev=811.12, samples=3 00:32:38.042 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:38.042 lat (msec) : 2=0.09%, 4=86.69%, 10=13.18% 00:32:38.042 cpu : usr=99.05%, sys=0.25%, ctx=1, majf=0, minf=608 00:32:38.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:38.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:38.042 issued rwts: total=36799,36803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:38.042 00:32:38.042 Run status group 0 (all jobs): 00:32:38.042 READ: bw=71.8MiB/s (75.3MB/s), 71.8MiB/s-71.8MiB/s (75.3MB/s-75.3MB/s), io=144MiB (151MB), run=2001-2001msec 00:32:38.042 WRITE: bw=71.8MiB/s (75.3MB/s), 71.8MiB/s-71.8MiB/s (75.3MB/s-75.3MB/s), io=144MiB (151MB), run=2001-2001msec 00:32:38.042 ----------------------------------------------------- 00:32:38.042 Suppressions used: 00:32:38.042 count bytes template 00:32:38.042 1 32 /usr/src/fio/parse.c 00:32:38.042 1 8 libtcmalloc_minimal.so 00:32:38.042 ----------------------------------------------------- 00:32:38.042 00:32:38.042 13:27:30 
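Each controller above goes through the same fio_plugin dance before fio starts: ldd inspects the SPDK fio plugin for a sanitizer runtime, and whatever it resolves (here /usr/lib64/libasan.so.8) is preloaded ahead of the plugin so ASAN's interceptors initialize before fio dlopen()s the ioengine. A hedged sketch of that logic, with the loop shape inferred from the xtrace rather than lifted from autotest_common.sh:

    fio_plugin() {
        local plugin=$1; shift
        local sanitizers=('libasan' 'libclang_rt.asan')
        local sanitizer asan_lib=
        for sanitizer in "${sanitizers[@]}"; do
            # Pull the resolved library path, e.g. /usr/lib64/libasan.so.8
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }

    fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096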
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:38.042 13:27:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:32:38.042 13:27:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:32:38.042 13:27:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:32:38.042 13:27:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:32:38.042 13:27:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:32:38.300 13:27:31 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:32:38.300 13:27:31 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:32:38.300 13:27:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:32:38.557 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:38.557 fio-3.35 00:32:38.557 Starting 1 thread 00:32:42.737 00:32:42.737 test: (groupid=0, jobs=1): err= 0: pid=66453: Fri Dec 6 13:27:35 2024 00:32:42.737 read: IOPS=18.0k, BW=70.2MiB/s (73.6MB/s)(141MiB/2001msec) 00:32:42.737 slat (usec): min=4, max=210, avg= 6.38, stdev= 2.00 00:32:42.737 clat (usec): min=336, max=8998, avg=3539.31, stdev=570.43 00:32:42.737 lat (usec): min=346, max=9003, avg=3545.69, stdev=571.29 00:32:42.737 clat percentiles (usec): 00:32:42.737 | 1.00th=[ 2999], 5.00th=[ 3097], 10.00th=[ 3130], 20.00th=[ 3195], 00:32:42.737 | 30.00th=[ 3228], 40.00th=[ 3261], 
50.00th=[ 3294], 60.00th=[ 3326], 00:32:42.737 | 70.00th=[ 3884], 80.00th=[ 4047], 90.00th=[ 4113], 95.00th=[ 4228], 00:32:42.737 | 99.00th=[ 5538], 99.50th=[ 7242], 99.90th=[ 8291], 99.95th=[ 8455], 00:32:42.737 | 99.99th=[ 8717] 00:32:42.737 bw ( KiB/s): min=71520, max=79176, per=100.00%, avg=75842.67, stdev=3922.71, samples=3 00:32:42.737 iops : min=17880, max=19794, avg=18960.67, stdev=980.68, samples=3 00:32:42.737 write: IOPS=18.0k, BW=70.3MiB/s (73.7MB/s)(141MiB/2001msec); 0 zone resets 00:32:42.737 slat (usec): min=5, max=574, avg= 6.52, stdev= 3.41 00:32:42.737 clat (usec): min=546, max=8966, avg=3544.83, stdev=562.25 00:32:42.737 lat (usec): min=554, max=8974, avg=3551.35, stdev=563.10 00:32:42.737 clat percentiles (usec): 00:32:42.737 | 1.00th=[ 2999], 5.00th=[ 3097], 10.00th=[ 3130], 20.00th=[ 3195], 00:32:42.737 | 30.00th=[ 3228], 40.00th=[ 3261], 50.00th=[ 3294], 60.00th=[ 3359], 00:32:42.737 | 70.00th=[ 3916], 80.00th=[ 4047], 90.00th=[ 4146], 95.00th=[ 4228], 00:32:42.737 | 99.00th=[ 5407], 99.50th=[ 7046], 99.90th=[ 8356], 99.95th=[ 8455], 00:32:42.737 | 99.99th=[ 8848] 00:32:42.737 bw ( KiB/s): min=71488, max=79272, per=100.00%, avg=75898.67, stdev=3994.33, samples=3 00:32:42.737 iops : min=17872, max=19818, avg=18974.67, stdev=998.58, samples=3 00:32:42.737 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:42.737 lat (msec) : 2=0.03%, 4=75.89%, 10=24.05% 00:32:42.737 cpu : usr=99.00%, sys=0.25%, ctx=4, majf=0, minf=608 00:32:42.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:42.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:42.737 issued rwts: total=35968,35995,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:42.737 00:32:42.737 Run status group 0 (all jobs): 00:32:42.737 READ: bw=70.2MiB/s (73.6MB/s), 70.2MiB/s-70.2MiB/s (73.6MB/s-73.6MB/s), io=141MiB (147MB), run=2001-2001msec 00:32:42.737 WRITE: bw=70.3MiB/s (73.7MB/s), 70.3MiB/s-70.3MiB/s (73.7MB/s-73.7MB/s), io=141MiB (147MB), run=2001-2001msec 00:32:42.737 ----------------------------------------------------- 00:32:42.737 Suppressions used: 00:32:42.737 count bytes template 00:32:42.737 1 32 /usr/src/fio/parse.c 00:32:42.737 1 8 libtcmalloc_minimal.so 00:32:42.737 ----------------------------------------------------- 00:32:42.737 00:32:42.737 13:27:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:42.737 13:27:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:32:42.737 13:27:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:32:42.737 13:27:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:32:42.737 13:27:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:32:42.737 13:27:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:32:42.996 13:27:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:32:42.996 13:27:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:32:42.996 13:27:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:32:43.254 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:43.254 fio-3.35 00:32:43.254 Starting 1 thread 00:32:48.516 00:32:48.516 test: (groupid=0, jobs=1): err= 0: pid=66520: Fri Dec 6 13:27:41 2024 00:32:48.516 read: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2001msec) 00:32:48.516 slat (nsec): min=4970, max=71347, avg=6383.13, stdev=1593.25 00:32:48.516 clat (usec): min=224, max=12000, avg=3561.50, stdev=603.45 00:32:48.516 lat (usec): min=230, max=12006, avg=3567.88, stdev=604.11 00:32:48.516 clat percentiles (usec): 00:32:48.516 | 1.00th=[ 2212], 5.00th=[ 3064], 10.00th=[ 3130], 20.00th=[ 3195], 00:32:48.516 | 30.00th=[ 3261], 40.00th=[ 3294], 50.00th=[ 3359], 60.00th=[ 3458], 00:32:48.516 | 70.00th=[ 3916], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4293], 00:32:48.516 | 99.00th=[ 5014], 99.50th=[ 6652], 99.90th=[ 8848], 99.95th=[ 9110], 00:32:48.516 | 99.99th=[11731] 00:32:48.516 bw ( KiB/s): min=64680, max=78328, per=100.00%, avg=72522.67, stdev=7048.41, samples=3 00:32:48.516 iops : min=16170, max=19582, avg=18130.67, stdev=1762.10, samples=3 00:32:48.516 write: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2001msec); 0 zone resets 00:32:48.516 slat (nsec): min=5092, max=58283, avg=6559.78, stdev=1623.32 00:32:48.516 clat (usec): min=269, max=12158, avg=3562.88, stdev=613.13 00:32:48.516 lat (usec): min=275, max=12163, avg=3569.44, stdev=613.79 00:32:48.516 clat percentiles (usec): 00:32:48.516 | 1.00th=[ 2212], 5.00th=[ 3032], 10.00th=[ 3130], 20.00th=[ 3195], 00:32:48.516 | 30.00th=[ 3261], 40.00th=[ 3294], 50.00th=[ 3359], 60.00th=[ 3458], 00:32:48.516 | 70.00th=[ 3916], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4293], 
00:32:48.516 | 99.00th=[ 5014], 99.50th=[ 6718], 99.90th=[ 8848], 99.95th=[10028], 00:32:48.516 | 99.99th=[11600] 00:32:48.516 bw ( KiB/s): min=65056, max=78256, per=100.00%, avg=72469.33, stdev=6748.67, samples=3 00:32:48.516 iops : min=16264, max=19564, avg=18117.33, stdev=1687.17, samples=3 00:32:48.516 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:32:48.516 lat (msec) : 2=0.62%, 4=74.33%, 10=24.95%, 20=0.04% 00:32:48.516 cpu : usr=99.30%, sys=0.00%, ctx=3, majf=0, minf=606 00:32:48.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:48.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:48.516 issued rwts: total=35792,35787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:48.516 00:32:48.516 Run status group 0 (all jobs): 00:32:48.516 READ: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2001-2001msec 00:32:48.517 WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2001-2001msec 00:32:48.517 ----------------------------------------------------- 00:32:48.517 Suppressions used: 00:32:48.517 count bytes template 00:32:48.517 1 32 /usr/src/fio/parse.c 00:32:48.517 1 8 libtcmalloc_minimal.so 00:32:48.517 ----------------------------------------------------- 00:32:48.517 00:32:48.517 13:27:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:48.517 13:27:41 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:32:48.517 00:32:48.517 real 0m20.217s 00:32:48.517 user 0m14.911s 00:32:48.517 sys 0m5.743s 00:32:48.517 13:27:41 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:48.517 13:27:41 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:32:48.517 ************************************ 00:32:48.517 END TEST nvme_fio 00:32:48.517 ************************************ 00:32:48.517 00:32:48.517 real 1m38.964s 00:32:48.517 user 3m51.451s 00:32:48.517 sys 0m26.769s 00:32:48.517 ************************************ 00:32:48.517 END TEST nvme 00:32:48.517 ************************************ 00:32:48.517 13:27:41 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:48.517 13:27:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:48.777 13:27:41 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:32:48.777 13:27:41 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:32:48.777 13:27:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:48.777 13:27:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:48.777 13:27:41 -- common/autotest_common.sh@10 -- # set +x 00:32:48.777 ************************************ 00:32:48.777 START TEST nvme_scc 00:32:48.777 ************************************ 00:32:48.777 13:27:41 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:32:48.777 * Looking for test storage... 
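The START TEST / END TEST banners and the real/user/sys triple that close each suite come from the run_test wrapper being traced here (its '[' 2 -le 1 ']' argument-count check is visible above). A loose sketch reconstructed from the output alone; the real autotest_common.sh helper also records timing data and manages xtrace, so the body below is an assumption:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # accounts for the real/user/sys lines in this log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh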
00:32:48.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:48.777 13:27:41 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:48.777 13:27:41 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:48.777 13:27:41 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:32:48.777 13:27:41 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@345 -- # : 1 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@368 -- # return 0 00:32:48.777 13:27:41 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:48.777 13:27:41 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:48.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.777 --rc genhtml_branch_coverage=1 00:32:48.777 --rc genhtml_function_coverage=1 00:32:48.777 --rc genhtml_legend=1 00:32:48.777 --rc geninfo_all_blocks=1 00:32:48.777 --rc geninfo_unexecuted_blocks=1 00:32:48.777 00:32:48.777 ' 00:32:48.777 13:27:41 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:48.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.777 --rc genhtml_branch_coverage=1 00:32:48.777 --rc genhtml_function_coverage=1 00:32:48.777 --rc genhtml_legend=1 00:32:48.777 --rc geninfo_all_blocks=1 00:32:48.777 --rc geninfo_unexecuted_blocks=1 00:32:48.777 00:32:48.777 ' 00:32:48.777 13:27:41 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:48.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.777 --rc genhtml_branch_coverage=1 00:32:48.777 --rc genhtml_function_coverage=1 00:32:48.777 --rc genhtml_legend=1 00:32:48.777 --rc geninfo_all_blocks=1 00:32:48.777 --rc geninfo_unexecuted_blocks=1 00:32:48.777 00:32:48.777 ' 00:32:48.777 13:27:41 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:48.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.777 --rc genhtml_branch_coverage=1 00:32:48.777 --rc genhtml_function_coverage=1 00:32:48.777 --rc genhtml_legend=1 00:32:48.777 --rc geninfo_all_blocks=1 00:32:48.777 --rc geninfo_unexecuted_blocks=1 00:32:48.777 00:32:48.777 ' 00:32:48.777 13:27:41 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:48.777 13:27:41 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:48.777 13:27:41 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:32:48.777 13:27:41 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:48.777 13:27:41 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.777 13:27:41 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.777 13:27:41 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.777 13:27:41 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.778 13:27:41 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.778 13:27:41 nvme_scc -- paths/export.sh@5 -- # export PATH 00:32:48.778 13:27:41 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
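The lt 1.15 2 trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2, which gates the branch/function coverage flags that follow. A hedged reconstruction of the comparison: splitting on '.', '-' and ':' matches the IFS in the trace, while the return conventions and unequal-length handling are inferred (the real script also routes components through a decimal helper):

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < max; v++)); do
            # Missing components compare as 0; the first differing pair decides
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]
    }

    cmp_versions 1.15 '<' 2 && echo 'lcov 1.15 is older than 2'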
00:32:48.778 13:27:41 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:32:48.778 13:27:41 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:32:48.778 13:27:41 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:32:48.778 13:27:41 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:32:48.778 13:27:41 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:32:48.778 13:27:41 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:32:48.778 13:27:41 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:32:48.778 13:27:41 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:32:48.778 13:27:41 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:32:48.778 13:27:41 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:48.778 13:27:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:32:48.778 13:27:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:32:48.778 13:27:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:32:48.778 13:27:41 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:49.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:49.661 Waiting for block devices as requested 00:32:49.661 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:49.661 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:49.661 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:49.920 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:55.189 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:55.189 13:27:47 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:32:55.189 13:27:47 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:55.189 13:27:47 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:55.189 13:27:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:55.189 13:27:47 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
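The long run of nvme0[...] assignments above and below is nvme/functions.sh caching every id-ctrl field into a bash associative array, one array per controller, so later checks (e.g. the ONCS bits the SCC test cares about) become plain lookups. A hedged sketch of that loop; the field trimming and eval quoting are approximations of what the trace shows:

    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # e.g. declare -gA nvme0=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # "vid " -> "vid"
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[$reg]=\${val# }"    # e.g. nvme0[vid]=0x1b36
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }

    nvme_get nvme0 id-ctrl /dev/nvme0
    echo "mdts=${nvme0[mdts]} oncs=${nvme0[oncs]}"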
00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.189 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:32:55.190 13:27:48 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:32:55.190 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:55.191 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.191 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # 
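
The trace above is the whole of nvme_get's mechanism: it shells out to the bundled nvme-cli binary (/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1), splits each output line at the first colon with IFS=: read -r reg val, and evals the pair into a globally scoped associative array named after the device node (ng0n1[nsze]=0x140000 and so on). A minimal standalone sketch of that loop, with the array name and the whitespace trimming chosen for illustration rather than copied from nvme/functions.sh:

    #!/usr/bin/env bash
    # Sketch of the id-ctrl/id-ns parse pattern visible in this trace.
    # Assumes nvme-cli is installed; "info" is an illustrative array name.
    declare -A info
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue        # skip blank or headerless lines
        reg=${reg//[[:space:]]/}                    # "lbaf  0" -> "lbaf0", like the trace keys
        val=${val#"${val%%[![:space:]]*}"}          # drop the padding after the colon
        info[$reg]=$val                             # e.g. info[sqes]=0x66
    done < <(nvme id-ctrl /dev/nvme0)
    echo "sqes=${info[sqes]:-unknown}"

The eval seen in the log does the same thing one level more dynamically, so the array name itself (nvme0, ng0n1, nvme0n1, ...) can be passed in as a parameter.
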
read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:32:55.192 
13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.192 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:32:55.193 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:32:55.193 13:27:48 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.193 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:32:55.194 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.194 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:32:55.195 13:27:48 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:55.195 13:27:48 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:32:55.195 13:27:48 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:55.195 13:27:48 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:32:55.195 13:27:48 
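
This block also closes out controller nvme0: the namespace map is saved (nvmes[nvme0]=nvme0_ns), the controller and its PCI address are registered (ctrls[nvme0]=nvme0, bdfs[nvme0]=0000:00:11.0), and the scan loop advances to /sys/class/nvme/nvme1, where pci_can_use returns 0 because no block or allow filter is set (note the empty expansions in the [[ ]] tests from scripts/common.sh). A rough sketch of that enumeration step; PCI_BLOCKED is an assumed stand-in for whatever blocklist the trace is matching against:

    # Sketch of the /sys/class/nvme scan seen above (illustrative names).
    declare -A ctrls bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                       # glob may match nothing
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:11.0
        [[ " ${PCI_BLOCKED:-} " == *" $pci "* ]] && continue  # honor a blocklist
        name=${ctrl##*/}                                 # nvme0, nvme1, ...
        ctrls[$name]=$name
        bdfs[$name]=$pci
    done
    for name in "${!bdfs[@]}"; do printf '%s -> %s\n' "$name" "${bdfs[$name]}"; done
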
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.195 
13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:32:55.195 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:32:55.196 
13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:32:55.196 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:32:55.197 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.197 13:27:48 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:55.197 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.197 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:32:55.197 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:32:55.197 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.197 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.197 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.197 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:32:55.197 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
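Every record in this stretch comes from one loop in nvme/functions.sh (the @16-@23 markers): nvme_get runs nvme-cli's id-ctrl against the device, splits each output line on the colon with IFS=: read -r reg val, skips entries with no value, and evals the pair into a global associative array named after the controller (nvme1[mdts]=7 and so on, continuing below). A minimal standalone sketch of that loop, assuming nvme-cli's usual "field : value" layout; the helper name and the whitespace trimming are illustrative, not the upstream code:

    # Populate a global associative array ($1) from `nvme <id-cmd> <dev>`.
    # Sketch only: nvme_get_sketch is a stand-in for functions.sh's nvme_get.
    nvme_get_sketch() {
        local ref=$1 id=$2 dev=$3 reg val
        local -gA "$ref=()"                      # e.g. declares global nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # keep only "reg : val" lines
            reg=${reg//[[:space:]]/}             # drop the column padding
            eval "${ref}[\$reg]=\"\${val# }\""   # -> nvme1[mdts]="7", ...
        done < <(/usr/local/src/nvme-cli/nvme "$id" "$dev")
    }
    # Usage mirroring the trace: nvme_get_sketch nvme1 id-ctrl /dev/nvme1
    # then: echo "${nvme1[mdts]}"   # 7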
00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.462 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.462 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:55.463 13:27:48 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
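A quick sanity check on the namespace geometry being read back here: nsze, ncap and nuse all report 0x17a17a LBAs, flbas is 0x7, and the format flagged "(in use)" further down (lbaf7) carries lbads:12, i.e. 2^12 = 4096-byte blocks, so this QEMU namespace works out to roughly 5.9 GiB:

    # Assuming the values from this id-ns dump: 0x17a17a blocks of 2^12 bytes.
    blocks=$(( 0x17a17a ))      # nsze = ncap = nuse
    lbads=12                    # lbaf7 "ms:64 lbads:12 rp:0 (in use)"
    echo "$(( blocks << lbads )) bytes"   # 6343335936, ~5.9 GiB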
00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.463 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:32:55.464 13:27:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.464 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 
13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
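Note how the loop at functions.sh@54 visits this namespace twice: once as the generic character node ng1n1 (parsed above) and once as the block node nvme1n1 (being parsed here), because the glob pairs both prefixes in an extglob alternation. A standalone equivalent of that pattern, assuming the same sysfs layout as the trace:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # After expansion: ${ctrl##*nvme} -> "1", ${ctrl##*/} -> "nvme1",
    # so the glob becomes "$ctrl"/@(ng1|nvme1n)*
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"    # prints ng1n1, then nvme1n1
    done

Both parses land in the same per-controller map via _ctrl_ns[${ns##*n}] (functions.sh@58), so for namespace 1 the nvme1n1 entry overwrites the earlier ng1n1 entry.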
00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.465 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:32:55.466 
13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:32:55.466 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:32:55.467 13:27:48 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:32:55.467 13:27:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:32:55.467 13:27:48 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:55.467 13:27:48 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:32:55.467 13:27:48 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:55.467 13:27:48 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:32:55.468 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
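Everything above is one fixed pattern repeated per register: nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl, reads the output line by line with IFS=: into a reg/val pair, skips lines that carry no value, and evals the value into a global associative array named after the device (nvme2 here). A minimal standalone sketch of that pattern, under a hypothetical name (the real helper is nvme_get in nvme/functions.sh and does more than this):

#!/usr/bin/env bash
# Sketch only, not harness code: parse_id_ctrl is a hypothetical stand-in
# for nvme_get in nvme/functions.sh, reproducing the pattern in the trace.
parse_id_ctrl() {
  local ref=$1 ctrl=$2 reg val
  local -gA "$ref=()"                  # same global declaration as the trace
  while IFS=: read -r reg val; do
    [[ -n $val ]] || continue          # banner/blank lines carry no value
    reg=${reg%%[[:space:]]*}           # drop the padding after the key
    eval "${ref}[\$reg]=\${val# }"     # e.g. nvme2[oncs]='0x15d'
  done < <(nvme id-ctrl "$ctrl")       # the run above uses /usr/local/src/nvme-cli/nvme
}
parse_id_ctrl nvme2 /dev/nvme2
echo "oncs=${nvme2[oncs]}"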
00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:32:55.469 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:55.469 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
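One detail worth decoding in the values just stored: id-ctrl reports wctemp and cctemp in Kelvin, so the 343 and 373 above are (as far as I can tell) the stock QEMU thresholds, roughly 70C warning and 100C critical:

# Aside, not part of the log: Kelvin to Celsius for the thresholds above.
echo "wctemp $(( 343 - 273 ))C, cctemp $(( 373 - 273 ))C"   # 70C, 100C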
00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:32:55.470 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.470 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:32:55.471 
13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.471 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:55.472 
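That completes the id-ctrl pass for nvme2, and the trace is about to walk its namespaces. The one field this nvme_scc run ultimately cares about is the oncs=0x15d captured above: by my reading of the NVMe base spec (not something the log itself states), bit 8 of ONCS advertises the Copy command, so a check against the captured value would be:

# Aside / sketch: ONCS bit 8 = Copy command support (NVMe base spec,
# Identify Controller, Optional NVM Command Support). Value from this run:
oncs=0x15d
if (( oncs & (1 << 8) )); then
  echo "Copy supported, SCC is testable on this controller"
fi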
13:27:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
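With nsze, flbas and the format count for ng2n1 parsed, the geometry follows by arithmetic. As a worked example from this run's values (flbas=0x4 selects LBA format 4, whose lbads of 12 is logged a little further down, flagged in use):

# Worked example with ng2n1's values from this run; not harness code.
flbas=0x4 nsze=0x100000 lbads=12
fmt=$(( flbas & 0xf ))                 # low nibble picks the format: 4
bs=$(( 1 << lbads ))                   # 2^12 = 4096-byte logical blocks
echo "lbaf$fmt: $(( nsze * bs / 1024**3 )) GiB"   # 1048576 blocks * 4096 = 4 GiB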
00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.472 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.473 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:32:55.473 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:32:55.473 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.473 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.473 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.473 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:32:55.473 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.740 13:27:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.740 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:32:55.741 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 
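The loop header seen before each of these passes, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, is what turns up ng2n1, this ng2n2 pass, and the block-device nodes. A small sketch of how that glob expands for nvme2, assuming extglob is enabled as in the harness:

# Sketch of the namespace glob from the trace's for-loop (needs extglob).
shopt -s extglob
ctrl=/sys/class/nvme/nvme2
# "ng${ctrl##*nvme}" -> "ng2" (char devices), "${ctrl##*/}n" -> "nvme2n" (block devices)
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  echo "namespace node: ${ns##*/}"     # ng2n1, ng2n2, nvme2n1, ...
done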
13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:32:55.741 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:55.742 13:27:48 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.742 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.743 13:27:48 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.743 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.744 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:32:55.744 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:32:55.745 13:27:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.745 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:32:55.746 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:32:55.746 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:32:55.747 
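
The nvme_get call traced here is the whole ingestion mechanism: nvme-cli prints "reg : val" lines, the loop splits them on ':' via IFS, and evals each pair into a global associative array named after the device node. A condensed sketch of that loop, paraphrased from the trace rather than copied from the SPDK tree (the helper name and the whitespace trimming are illustrative):

    nvme_get_sketch() {                      # nvme_get_sketch nvme2n3 nvme id-ns /dev/nvme2n3
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # global assoc array, e.g. nvme2n3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # "nsze    " -> "nsze"
            [[ -n $val ]] || continue        # mirrors the [[ -n ... ]] guards in the trace
            eval "${ref}[${reg}]=\"${val# }\""
        done < <("$@")
    }

Because read only splits on the first colon, composite values such as "ms:0 lbads:9 rp:0" land intact in val, which is why the lbafN entries above keep their inner colons.
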
13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:32:55.747 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:32:55.748 13:27:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:55.748 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:55.749 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:32:55.749 13:27:48 nvme_scc -- 
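
Those lbafN strings are the useful payload of the namespace dump: ms is metadata bytes per block, lbads is log2 of the LBA data size, rp is relative performance, and flbas selects the live format, hence the "(in use)" tag on lbaf4. A small helper, hypothetical but built only on values parsed above, recovers the active block size:

    lbaf_block_size() {
        local -n _ns=$1                      # nameref, e.g. lbaf_block_size nvme2n3
        local fmt=$(( ${_ns[flbas]} & 0xf )) # FLBAS low nibble = active format index
        local lbads=${_ns[lbaf$fmt]#*lbads:} # "ms:0 lbads:12 rp:0 ..." -> "12 rp:0 ..."
        echo $(( 1 << ${lbads%% *} ))        # lbads:12 -> 4096-byte LBAs
    }

For the namespaces above (flbas=0x4, lbaf4 with lbads:12) that yields 4096, and nsze=0x100000 such blocks makes each namespace 4 GiB.
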
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:32:55.749 13:27:48 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:55.749 13:27:48 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:55.749 13:27:48 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:55.749 13:27:48 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 
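
The pci_can_use gate traced from scripts/common.sh decides whether the autotest may claim 0000:00:13.0; with PCI_BLOCKED and PCI_ALLOWED both empty (the [[ =~ ... ]] against an empty expansion, then [[ -z '' ]]) it falls through to return 0 and the controller is taken. Roughly, as a paraphrase of that logic rather than the exact source:

    pci_can_use_sketch() {                   # pci_can_use_sketch 0000:00:13.0
        local i
        [[ " $PCI_BLOCKED " =~ \ $1\  ]] && return 1   # explicit block list wins
        [[ -z $PCI_ALLOWED ]] && return 0              # no allow list: take everything
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }
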
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.749 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:32:55.749 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:32:55.750 13:27:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:55.750 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 
13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:32:56.011 13:27:48 nvme_scc -- 
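
wctemp=343 and cctemp=373 read oddly until you recall that NVMe reports temperature thresholds in kelvins; these values decode to a 70 °C warning and 100 °C critical threshold. A throwaway converter for log readers (not part of functions.sh):

    k2c() { echo "$(( $1 - 273 )) C"; }      # k2c 343 -> 70 C, k2c 373 -> 100 C
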
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:32:56.011 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 
13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:32:56.012 
13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read 
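
sqes=0x66 and cqes=0x44 each pack two log2 sizes into one byte: the low nibble is the required entry size, the high nibble the maximum. A quick decoder (illustrative helper, not from the trace):

    decode_qes() {                           # decode_qes 0x66
        local v=$1
        echo "min=$(( 1 << (v & 0xf) ))B max=$(( 1 << (v >> 4) ))B"
    }

decode_qes 0x66 gives min=64B max=64B (submission queue entries); decode_qes 0x44 gives min=16B max=16B (completion queue entries).
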
-r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.012 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.013 13:27:48 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:32:56.013 13:27:48 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:32:56.013 13:27:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
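[Note: the xtrace around this point is get_ctrls_with_feature/ctrl_has_scc from test/common/nvme/functions.sh walking every discovered controller. The test being traced is ONCS bit 8, which is how a controller advertises the NVMe Copy (Simple Copy) command; the value 0x15d read back here has bit 8 set, so each controller qualifies. A condensed sketch of that check follows, using the helper names as they appear in the trace; this is an illustration of the traced logic, not the verbatim functions.sh source.]

    ctrl_has_scc() {
        local ctrl=$1 oncs
        # Read the cached id-ctrl ONCS value for this controller, e.g. 0x15d.
        oncs=$(get_nvme_ctrl_feature "$ctrl" oncs)
        # ONCS bit 8 set => the controller supports the Copy (Simple Copy) command.
        (( oncs & 1 << 8 ))
    }

    # Usage, mirroring the loop in the trace:
    ctrl_has_scc nvme1 && echo nvme1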
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:32:56.013 13:27:48 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:32:56.013 13:27:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:32:56.013 13:27:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:32:56.013 13:27:48 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:32:56.580 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:57.146 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:32:57.405 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:32:57.405 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:32:57.405 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:32:57.405 13:27:50 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:32:57.405 13:27:50 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:57.405 13:27:50 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:57.405 13:27:50 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:32:57.405 ************************************
00:32:57.405 START TEST nvme_simple_copy
00:32:57.405 ************************************
00:32:57.405 13:27:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:32:57.981 Initializing NVMe Controllers
00:32:57.981 Attaching to 0000:00:10.0
00:32:57.981 Controller supports SCC. Attached to 0000:00:10.0
00:32:57.981 Namespace ID: 1 size: 6GB
00:32:57.981 Initialization complete.
00:32:57.981
00:32:57.981 Controller QEMU NVMe Ctrl (12340 )
00:32:57.981 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:32:57.981 Namespace Block Size:4096
00:32:57.981 Writing LBAs 0 to 63 with Random Data
00:32:57.981 Copied LBAs from 0 - 63 to the Destination LBA 256
00:32:57.981 LBAs matching Written Data: 64
00:32:57.981
00:32:57.981 real 0m0.373s
00:32:57.981 user 0m0.142s
00:32:57.981 sys 0m0.128s
00:32:57.981 ************************************
00:32:57.981 END TEST nvme_simple_copy
00:32:57.981 ************************************
00:32:57.981 13:27:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:57.981 13:27:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:32:57.981
00:32:57.981 real 0m9.224s
00:32:57.981 user 0m1.788s
00:32:57.981 sys 0m2.268s
00:32:57.981 13:27:50 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:57.981 13:27:50 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:32:57.981 ************************************
00:32:57.981 END TEST nvme_scc
00:32:57.981 ************************************
00:32:57.981 13:27:50 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:32:57.981 13:27:50 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:32:57.981 13:27:50 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:32:57.981 13:27:50 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:32:57.981 13:27:50 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:32:57.981 13:27:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:32:57.981 13:27:50 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:57.981 13:27:50 -- common/autotest_common.sh@10 -- # set +x
00:32:57.981 ************************************
00:32:57.981 START TEST nvme_fdp
00:32:57.981 ************************************
00:32:57.981 13:27:50 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:32:57.981 * Looking for test storage...
00:32:57.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:32:57.981 13:27:51 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:57.981 13:27:51 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:32:57.981 13:27:51 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:58.252 13:27:51 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:58.252 13:27:51 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:32:58.252 13:27:51 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:58.252 13:27:51 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.252 --rc genhtml_branch_coverage=1 00:32:58.252 --rc genhtml_function_coverage=1 00:32:58.252 --rc genhtml_legend=1 00:32:58.252 --rc geninfo_all_blocks=1 00:32:58.252 --rc geninfo_unexecuted_blocks=1 00:32:58.252 00:32:58.252 ' 00:32:58.252 13:27:51 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.252 --rc genhtml_branch_coverage=1 00:32:58.252 --rc genhtml_function_coverage=1 00:32:58.252 --rc genhtml_legend=1 00:32:58.252 --rc geninfo_all_blocks=1 00:32:58.252 --rc geninfo_unexecuted_blocks=1 00:32:58.252 00:32:58.252 ' 00:32:58.252 13:27:51 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.252 --rc genhtml_branch_coverage=1 00:32:58.252 --rc genhtml_function_coverage=1 00:32:58.253 --rc genhtml_legend=1 00:32:58.253 --rc geninfo_all_blocks=1 00:32:58.253 --rc geninfo_unexecuted_blocks=1 00:32:58.253 00:32:58.253 ' 00:32:58.253 13:27:51 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:58.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:58.253 --rc genhtml_branch_coverage=1 00:32:58.253 --rc genhtml_function_coverage=1 00:32:58.253 --rc genhtml_legend=1 00:32:58.253 --rc geninfo_all_blocks=1 00:32:58.253 --rc geninfo_unexecuted_blocks=1 00:32:58.253 00:32:58.253 ' 00:32:58.253 13:27:51 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:58.253 13:27:51 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:58.253 13:27:51 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:58.253 13:27:51 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:58.253 13:27:51 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:58.253 13:27:51 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.253 13:27:51 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.253 13:27:51 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.253 13:27:51 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:32:58.253 13:27:51 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:32:58.253 13:27:51 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:32:58.253 13:27:51 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:58.253 13:27:51 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:58.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:58.819 Waiting for block devices as requested 00:32:59.077 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:59.077 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:59.077 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:59.336 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:04.692 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:04.692 13:27:57 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:33:04.692 13:27:57 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:33:04.692 13:27:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:33:04.692 13:27:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:33:04.692 13:27:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:33:04.692 13:27:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:33:04.692 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.692 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:33:04.693 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:33:04.693 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:33:04.694 13:27:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 
13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:33:04.694 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:33:04.694 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:33:04.695 13:27:57 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:33:04.695 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.695 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:33:04.696 13:27:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
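The trace above shows the nvme_get pattern end to end for ng0n1: with IFS set to ':', each "field : value" line emitted by nvme-cli's id-ns is split by read -r reg val (functions.sh@21), empty values are skipped (@22), and the rest are stored into a globally declared associative array via eval (@23). A minimal standalone sketch of the same pattern, assuming nvme-cli's human-readable id-ns output; the device path and the eval-free storage are illustrative:

#!/usr/bin/env bash
# Sketch of the nvme_get loop seen in the trace: parse "field : value"
# lines from nvme-cli into a bash associative array.
declare -A ns
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}     # squeeze padding out of the field name
    [[ -n $val ]] || continue    # skip valueless lines, as @22 does
    ns[$reg]=${val# }            # keep the value, minus the leading space
done < <(nvme id-ns /dev/ng0n1)

# Worked example from the captured fields: nsze=0x140000 blocks with the
# in-use LBA format lbaf4 (lbads:12, i.e. 2^12 = 4096-byte blocks) gives
# 0x140000 * 4096 = 5368709120 bytes = 5 GiB.
printf 'nsze=%s -> %d bytes\n' "${ns[nsze]}" $(( ns[nsze] * 4096 ))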
00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.696 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:33:04.697 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.697 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:33:04.698 13:27:57 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:33:04.698 13:27:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:33:04.698 13:27:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:33:04.698 13:27:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:33:04.698 13:27:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:33:04.698 13:27:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:33:04.698 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
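Before this id-ctrl dump began, functions.sh@47-52 (visible above) walked /sys/class/nvme/nvme*, resolved nvme1 to PCI address 0000:00:10.0, and let pci_can_use decide the device was available (scripts/common.sh returned 0 because the allow list was empty). A sketch of that discovery loop under the sysfs layout shown in the trace; the readlink-based PCI lookup and the PCI_ALLOWED variable are illustrative stand-ins for what common.sh actually consults:

#!/usr/bin/env bash
# Sketch of the controller walk from functions.sh@47-52: every
# /sys/class/nvme/nvmeN backed by a usable PCI function gets id-ctrl'd.
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:10.0
    # In the trace pci_can_use returned 0 because no allow/block list was set.
    if [[ -n ${PCI_ALLOWED:-} && $PCI_ALLOWED != *"$pci"* ]]; then
        continue
    fi
    ctrl_dev=${ctrl##*/}                             # e.g. nvme1
    nvme id-ctrl "/dev/$ctrl_dev"                    # functions.sh parses this
done                                                 # output into nvme1[...]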
00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.699 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
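Of the id-ctrl fields captured earlier in this dump, mdts=7 bounds the largest data transfer the controller accepts: MDTS is a power of two in units of the controller's minimum memory page size, CAP.MPSMIN. The CAP register is not part of this dump, so assuming the common 4 KiB minimum page, the limit works out to 2^7 * 4 KiB = 512 KiB per command:

#!/usr/bin/env bash
# Max transfer size implied by mdts=7, assuming CAP.MPSMIN = 4 KiB
# (an assumption; CAP is not included in the id-ctrl output above).
mdts=7
page_size=$(( 4 * 1024 ))
echo "$(( (1 << mdts) * page_size )) bytes"   # 524288 = 512 KiB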
00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:33:04.700 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.701 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:33:04.702 13:27:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
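The assignments traced here come from the nvme_get helper in nvme/functions.sh, visible in the trace itself: functions.sh@16 runs nvme id-ns, @21 does `IFS=:` plus `read -r reg val`, and @23 evals each `field : value` pair into a global associative array named after the device. A minimal standalone sketch of that pattern (the whitespace-trimming details and the bare `nvme` binary are assumptions; this build invokes /usr/local/src/nvme-cli/nvme):

    declare -gA ng1n1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # header lines carry no value field; skip them
        reg=${reg//[[:space:]]/}         # "lbaf  0 " -> "lbaf0", matching the keys in this log
        val=${val# }                     # drop the single space after the first ':'
        eval "ng1n1[$reg]=\"\$val\""     # array name is dynamic in the real helper, hence eval
    done < <(nvme id-ns /dev/ng1n1)
    echo "${ng1n1[nsze]}"                # -> 0x17a17a, as captured above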
00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:33:04.702 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:33:04.703 13:27:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
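A quick worked decode of the ng1n1 values captured so far, assuming standard NVMe field semantics: FLBAS bits 3:0 select the active LBA format, each lbaf's lbads is log2 of the LBA data size, and the "(in use)" lbaf7 descriptor printed further down in this dump reads ms:64 lbads:12.

    flbas=0x7; nsze=0x17a17a
    fmt=$(( flbas & 0xf ))               # -> 7: lbaf7 is the "(in use)" format
    block=$(( 1 << 12 ))                 # lbads:12 -> 4096-byte blocks
    echo "$(( nsze * block )) bytes"     # 1548666 * 4096 = 6343335936 (~5.9 GiB)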
00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:33:04.703 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:33:04.704 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:33:04.704 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:33:04.704 13:27:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.704 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:33:04.705 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.705 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:33:04.706 13:27:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:33:04.706 13:27:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:33:04.706 13:27:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:33:04.706 13:27:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.706 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
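nvme2 reports mdts=7 just above. MDTS bounds a single command's data transfer at 2^MDTS units of the controller's minimum memory page size (CAP.MPSMIN); taking the usual 4 KiB page as an assumption, that works out as:

    mdts=7
    mpsmin=4096                               # assumption: CAP.MPSMIN = 4 KiB
    echo "$(( (1 << mdts) * mpsmin )) bytes"  # 524288, i.e. 512 KiB per transfer
                                              # mdts=0 would instead mean no limit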
00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:33:04.707 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:33:04.707 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
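The wctemp=343 and cctemp=373 fields just captured are the warning and critical composite temperature thresholds, which the NVMe spec reports in Kelvin; converting the nvme2 values:

    wctemp=343; cctemp=373
    echo "warning:  $(( wctemp - 273 )) C"    # 70 C
    echo "critical: $(( cctemp - 273 )) C"    # 100 C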
00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:33:04.708 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:33:04.708 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:33:04.709 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
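By this point the controller-level fields are all captured (oncs=0x15d, sqes=0x66, cqes=0x44, subnqn, and so on), and later checks can read them back with ordinary shell arithmetic, since bash accepts the 0x-prefixed strings inside (( )). A small illustration of the kind of lookup the populated array enables; the bit positions follow the NVMe base specification's ONCS and SQES/CQES definitions and are not quoted from this test's own logic.

#!/usr/bin/env bash
# Values copied from the trace above; the checks are illustrative.
declare -A nvme2=([oncs]=0x15d [sqes]=0x66 [cqes]=0x44)
(( nvme2[oncs] & (1 << 2) )) && echo 'ONCS bit 2: Dataset Management supported'
(( nvme2[oncs] & (1 << 3) )) && echo 'ONCS bit 3: Write Zeroes supported'
# sqes/cqes pack log2(min size) in the low nibble and log2(max size) in
# the high nibble, so 0x66 means fixed 64-byte SQ entries and 0x44
# means fixed 16-byte CQ entries.
printf 'SQ entry: %d B, CQ entry: %d B\n' \
  $(( 1 << (nvme2[sqes] >> 4) )) $(( 1 << (nvme2[cqes] >> 4) ))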
00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.710 
13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.710 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:33:04.711 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.712 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.713 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:33:04.977 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.977 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 
13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:33:04.978 
13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:33:04.978 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
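The functions.sh@53-58 frames interleaved through this dump show the other half of the bookkeeping: once a controller's array is built, each namespace character device is globbed out of sysfs, identified with id-ns, and registered by namespace number in the controller's _ctrl_ns map (nvme2_ns here, bound by nameref at functions.sh@53). A condensed sketch of that enumeration, reusing the extglob pattern that appears verbatim in the trace; the per-namespace id-ns parsing step is elided.

#!/usr/bin/env bash
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
declare -A nvme2_ns=()
declare -n _ctrl_ns=nvme2_ns       # as at functions.sh@53
# Same glob as functions.sh@54: matches both ng2n* and nvme2n* entries.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  [[ -e $ns ]] || continue         # as at functions.sh@55
  ns_dev=${ns##*/}                 # e.g. ng2n3
  # the real script runs nvme_get "$ns_dev" id-ns "/dev/$ns_dev" here
  _ctrl_ns[${ns##*n}]=$ns_dev      # index by namespace number (@58)
done
declare -p nvme2_ns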
00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:33:04.979 13:27:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:33:04.979 13:27:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.979 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:33:04.980 13:27:57 nvme_fdp -- 
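The trace above is functions.sh's nvme_get filling a global associative array (ng2n3) one register at a time from nvme-cli output. A minimal sketch of that pattern under stated assumptions: the helper name and the declare/eval shape match the trace, but the exact field splitting and the canned fake_id_ns input are illustrative, not the SPDK code verbatim.

#!/usr/bin/env bash
# Sketch of the nvme_get pattern seen above: split each "reg : val"
# line on the first colon and eval the pair into a global associative
# array whose name is chosen at runtime (illustrative, not the exact
# helper from nvme/functions.sh).
nvme_get() {
    local ref=$1 reg val
    shift
    declare -gA "$ref=()"              # e.g. declare -gA ng2n3=()
    while IFS=': ' read -r reg val; do
        [[ -n $val ]] || continue      # skip blank / non "reg : val" lines
        eval "${ref}[\$reg]=\$val"     # -> ng2n3[nsze]=0x100000, etc.
    done < <("$@")
}

# Canned demo in place of `nvme id-ns /dev/ng2n3`, so no hardware is needed:
fake_id_ns() { printf 'nsze    : 0x100000\nnlbaf   : 7\nflbas   : 0x4\n'; }
nvme_get ng2n3 fake_id_ns
echo "${ng2n3[nsze]} ${ng2n3[nlbaf]}"  # prints: 0x100000 7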
00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 nvme2n1[ncap]=0x100000 nvme2n1[nuse]=0x100000 nvme2n1[nsfeat]=0x14 nvme2n1[nlbaf]=7 nvme2n1[flbas]=0x4 nvme2n1[mc]=0x3 nvme2n1[dpc]=0x1f nvme2n1[dps]=0
00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 nvme2n1[rescap]=0 nvme2n1[fpi]=0 nvme2n1[dlfeat]=1 nvme2n1[nawun]=0 nvme2n1[nawupf]=0 nvme2n1[nacwu]=0 nvme2n1[nabsn]=0 nvme2n1[nabo]=0
00:33:04.980 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 nvme2n1[noiob]=0 nvme2n1[nvmcap]=0 nvme2n1[npwg]=0 nvme2n1[npwa]=0 nvme2n1[npdg]=0 nvme2n1[npda]=0 nvme2n1[nows]=0 nvme2n1[mssrl]=128 nvme2n1[mcl]=128
00:33:04.981 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 nvme2n1[nulbaf]=0 nvme2n1[anagrpid]=0 nvme2n1[nsattr]=0 nvme2n1[nvmsetid]=0 nvme2n1[endgid]=0 nvme2n1[nguid]=00000000000000000000000000000000 nvme2n1[eui64]=0000000000000000
00:33:04.981 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:33:04.981 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:33:04.981 13:27:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
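Every namespace in this run reports flbas=0x4 and lbaf4 'ms:0 lbads:12 rp:0 (in use)': the low nibble of flbas selects the in-use LBA format, and lbads is log2 of the data block size, so these namespaces carry 4096-byte blocks with no metadata. A hedged sketch that recovers this from values like those captured above; the literal ns array here stands in for the parsed nvme2n1 array.

#!/usr/bin/env bash
# Derive the active block size from id-ns values like those in the trace.
declare -A ns=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
fmt=$(( ns[flbas] & 0xf ))                            # low nibble -> 4
lbads=$(sed -E 's/.*lbads:([0-9]+).*/\1/' <<< "${ns[lbaf$fmt]}")
echo "lbaf$fmt in use: $((1 << lbads))-byte blocks"   # 4096-byte blocks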
00:33:04.981 13:27:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:33:04.981 13:27:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:33:04.981 13:27:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:33:04.981 13:27:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:33:04.982 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:33:04.982 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 nvme2n2[ncap]=0x100000 nvme2n2[nuse]=0x100000 nvme2n2[nsfeat]=0x14 nvme2n2[nlbaf]=7 nvme2n2[flbas]=0x4 nvme2n2[mc]=0x3 nvme2n2[dpc]=0x1f nvme2n2[dps]=0
00:33:04.982 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 nvme2n2[rescap]=0 nvme2n2[fpi]=0 nvme2n2[dlfeat]=1 nvme2n2[nawun]=0 nvme2n2[nawupf]=0 nvme2n2[nacwu]=0 nvme2n2[nabsn]=0 nvme2n2[nabo]=0
00:33:04.982 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 nvme2n2[noiob]=0 nvme2n2[nvmcap]=0 nvme2n2[npwg]=0 nvme2n2[npwa]=0 nvme2n2[npdg]=0 nvme2n2[npda]=0 nvme2n2[nows]=0 nvme2n2[mssrl]=128 nvme2n2[mcl]=128
00:33:04.983 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 nvme2n2[nulbaf]=0 nvme2n2[anagrpid]=0 nvme2n2[nsattr]=0 nvme2n2[nvmsetid]=0 nvme2n2[endgid]=0 nvme2n2[nguid]=00000000000000000000000000000000 nvme2n2[eui64]=0000000000000000
00:33:04.983 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:33:04.983 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:33:04.983 13:27:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
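The functions.sh@54 loop above depends on extglob: for controller nvme2 the pattern expands to @(ng2|nvme2n)*, so a single glob matches both the character nodes (ng2n1, ...) and the block nodes (nvme2n1, ...). A standalone sketch of that expansion; the controller path is the one from this run, and nullglob is added so the loop simply does nothing on a machine without the device.

#!/usr/bin/env bash
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
# ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the glob below is
# /sys/class/nvme/nvme2/@(ng2|nvme2n)* : ng2n1.., nvme2n1.., and so on.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "namespace node ${ns##*/} -> index ${ns##*n}"
done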
00:33:04.983 13:27:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:33:04.983 13:27:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:33:04.983 13:27:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:33:04.983 13:27:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:33:04.983 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:33:04.983 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 nvme2n3[ncap]=0x100000 nvme2n3[nuse]=0x100000 nvme2n3[nsfeat]=0x14 nvme2n3[nlbaf]=7 nvme2n3[flbas]=0x4 nvme2n3[mc]=0x3 nvme2n3[dpc]=0x1f nvme2n3[dps]=0
00:33:04.984 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 nvme2n3[rescap]=0 nvme2n3[fpi]=0 nvme2n3[dlfeat]=1 nvme2n3[nawun]=0 nvme2n3[nawupf]=0 nvme2n3[nacwu]=0 nvme2n3[nabsn]=0 nvme2n3[nabo]=0
00:33:04.984 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 nvme2n3[noiob]=0 nvme2n3[nvmcap]=0 nvme2n3[npwg]=0 nvme2n3[npwa]=0 nvme2n3[npdg]=0 nvme2n3[npda]=0 nvme2n3[nows]=0 nvme2n3[mssrl]=128 nvme2n3[mcl]=128
00:33:04.984 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 nvme2n3[nulbaf]=0 nvme2n3[anagrpid]=0 nvme2n3[nsattr]=0 nvme2n3[nvmsetid]=0 nvme2n3[endgid]=0 nvme2n3[nguid]=00000000000000000000000000000000 nvme2n3[eui64]=0000000000000000
00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:33:04.985 13:27:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:33:04.985 13:27:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:33:04.985 13:27:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:33:04.985 13:27:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.985 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 
13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.986 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:33:04.987 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:33:04.988 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:33:04.989 13:27:58 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 ))
00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:33:04.989 13:27:58 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:33:04.989 13:27:58 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:33:04.989 13:27:58 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:33:04.989 13:27:58 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:33:05.556 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:06.489 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:33:06.489 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:33:06.489 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:33:06.489 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:33:06.489 13:27:59 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:33:06.489 13:27:59 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:06.489 13:27:59 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:06.489 13:27:59 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:33:06.489 ************************************
00:33:06.489 START TEST nvme_flexible_data_placement
00:33:06.489 ************************************
00:33:06.489 13:27:59 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:33:07.056 Initializing NVMe Controllers
00:33:07.056 Attaching to 0000:00:13.0
00:33:07.056 Controller supports FDP
00:33:07.056 Attached to 0000:00:13.0
00:33:07.056 Namespace ID: 1 Endurance Group ID: 1
00:33:07.056 Initialization complete.
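# The controller walk traced above reduces FDP detection to one bitwise test:
# FDP support is advertised by bit 19 of the Identify Controller CTRATT field,
# which is why ctratt=0x88010 selects nvme3 while the 0x8000 controllers are
# skipped. A minimal standalone sketch of the same check, assuming nvme-cli's
# plain-text "nvme id-ctrl" output (the awk field position is an assumption,
# not something this harness relies on):
ctrl_has_fdp() {
    local dev=$1 ctratt
    ctratt=$(nvme id-ctrl "$dev" | awk '/^ctratt/ {print $3}')   # e.g. 0x88010
    (( ctratt & 1 << 19 ))   # exit status 0 only when the FDP bit is set
}
ctrl_has_fdp /dev/nvme3 && echo "nvme3 supports FDP"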
00:33:07.056
00:33:07.056 ==================================
00:33:07.056 == FDP tests for Namespace: #01 ==
00:33:07.056 ==================================
00:33:07.056
00:33:07.056 Get Feature: FDP:
00:33:07.056 =================
00:33:07.056 Enabled: Yes
00:33:07.056 FDP configuration Index: 0
00:33:07.056
00:33:07.056 FDP configurations log page
00:33:07.056 ===========================
00:33:07.056 Number of FDP configurations: 1
00:33:07.056 Version: 0
00:33:07.056 Size: 112
00:33:07.056 FDP Configuration Descriptor: 0
00:33:07.056 Descriptor Size: 96
00:33:07.056 Reclaim Group Identifier format: 2
00:33:07.056 FDP Volatile Write Cache: Not Present
00:33:07.056 FDP Configuration: Valid
00:33:07.056 Vendor Specific Size: 0
00:33:07.056 Number of Reclaim Groups: 2
00:33:07.056 Number of Reclaim Unit Handles: 8
00:33:07.056 Max Placement Identifiers: 128
00:33:07.056 Number of Namespaces Supported: 256
00:33:07.056 Reclaim Unit Nominal Size: 6000000 bytes
00:33:07.056 Estimated Reclaim Unit Time Limit: Not Reported
00:33:07.056 RUH Desc #000: RUH Type: Initially Isolated
00:33:07.056 RUH Desc #001: RUH Type: Initially Isolated
00:33:07.056 RUH Desc #002: RUH Type: Initially Isolated
00:33:07.056 RUH Desc #003: RUH Type: Initially Isolated
00:33:07.056 RUH Desc #004: RUH Type: Initially Isolated
00:33:07.056 RUH Desc #005: RUH Type: Initially Isolated
00:33:07.056 RUH Desc #006: RUH Type: Initially Isolated
00:33:07.056 RUH Desc #007: RUH Type: Initially Isolated
00:33:07.056
00:33:07.056 FDP reclaim unit handle usage log page
00:33:07.056 ======================================
00:33:07.056 Number of Reclaim Unit Handles: 8
00:33:07.056 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:33:07.056 RUH Usage Desc #001: RUH Attributes: Unused
00:33:07.056 RUH Usage Desc #002: RUH Attributes: Unused
00:33:07.056 RUH Usage Desc #003: RUH Attributes: Unused
00:33:07.056 RUH Usage Desc #004: RUH Attributes: Unused
00:33:07.056 RUH Usage Desc #005: RUH Attributes: Unused
00:33:07.056 RUH Usage Desc #006: RUH Attributes: Unused
00:33:07.056 RUH Usage Desc #007: RUH Attributes: Unused
00:33:07.056
00:33:07.056 FDP statistics log page
00:33:07.056 =======================
00:33:07.056 Host bytes with metadata written: 759230464
00:33:07.056 Media bytes with metadata written: 759406592
00:33:07.056 Media bytes erased: 0
00:33:07.056
00:33:07.056 FDP Reclaim unit handle status
00:33:07.057 ==============================
00:33:07.057 Number of RUHS descriptors: 2
00:33:07.057 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002bf1
00:33:07.057 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:33:07.057
00:33:07.057 FDP write on placement id: 0 success
00:33:07.057
00:33:07.057 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:33:07.057
00:33:07.057 IO mgmt send: RUH update for Placement ID: #0 Success
00:33:07.057
00:33:07.057 Get Feature: FDP Events for Placement handle: #0
00:33:07.057 ========================
00:33:07.057 Number of FDP Events: 6
00:33:07.057 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:33:07.057 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:33:07.057 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:33:07.057 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:33:07.057 FDP Event: #4 Type: Media Reallocated Enabled: No
00:33:07.057 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:33:07.057
00:33:07.057 FDP events log page
00:33:07.057 ===================
00:33:07.057 Number of FDP events: 1
00:33:07.057 FDP Event #0:
00:33:07.057 Event Type: RU Not Written to Capacity
00:33:07.057 Placement Identifier: Valid
00:33:07.057 NSID: Valid
00:33:07.057 Location: Valid
00:33:07.057 Placement Identifier: 0
00:33:07.057 Event Timestamp: c
00:33:07.057 Namespace Identifier: 1
00:33:07.057 Reclaim Group Identifier: 0
00:33:07.057 Reclaim Unit Handle Identifier: 0
00:33:07.057
00:33:07.057 FDP test passed
00:33:07.057
00:33:07.057 real 0m0.352s
00:33:07.057 user 0m0.123s
00:33:07.057 sys 0m0.127s
00:33:07.057 13:27:59 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:07.057 ************************************
00:33:07.057 END TEST nvme_flexible_data_placement
00:33:07.057 ************************************
00:33:07.057 13:27:59 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:33:07.057
00:33:07.057 real 0m9.045s
00:33:07.057 user 0m1.659s
00:33:07.057 sys 0m2.398s
00:33:07.057 13:27:59 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:07.057 ************************************
00:33:07.057 END TEST nvme_fdp
00:33:07.057 ************************************
00:33:07.057 13:27:59 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:33:07.057 13:28:00 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:33:07.057 13:28:00 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:33:07.057 13:28:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:33:07.057 13:28:00 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:07.057 13:28:00 -- common/autotest_common.sh@10 -- # set +x
00:33:07.057 ************************************
00:33:07.057 START TEST nvme_rpc
00:33:07.057 ************************************
00:33:07.057 13:28:00 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:33:07.057 * Looking for test storage...
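# The nvme_rpc test that begins here exercises SPDK's JSON-RPC surface against
# a live spdk_tgt. Condensed to its essentials, the flow below mirrors the
# rpc.py calls that appear later in this log; treat it as a sketch (default
# /var/tmp/spdk.sock socket, no retry or error handling), not the test script
# itself:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # prints the new bdev name, Nvme0n1
$rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 \
    || echo 'expected failure: open file failed (-32603)'           # negative test
$rpc bdev_nvme_detach_controller Nvme0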
00:33:07.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:33:07.057 13:28:00 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:07.057 13:28:00 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:07.057 13:28:00 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:33:07.316 13:28:00 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:07.317 13:28:00 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:07.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.317 --rc genhtml_branch_coverage=1 00:33:07.317 --rc genhtml_function_coverage=1 00:33:07.317 --rc genhtml_legend=1 00:33:07.317 --rc geninfo_all_blocks=1 00:33:07.317 --rc geninfo_unexecuted_blocks=1 00:33:07.317 00:33:07.317 ' 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:07.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.317 --rc genhtml_branch_coverage=1 00:33:07.317 --rc genhtml_function_coverage=1 00:33:07.317 --rc genhtml_legend=1 00:33:07.317 --rc geninfo_all_blocks=1 00:33:07.317 --rc geninfo_unexecuted_blocks=1 00:33:07.317 00:33:07.317 ' 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:33:07.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.317 --rc genhtml_branch_coverage=1 00:33:07.317 --rc genhtml_function_coverage=1 00:33:07.317 --rc genhtml_legend=1 00:33:07.317 --rc geninfo_all_blocks=1 00:33:07.317 --rc geninfo_unexecuted_blocks=1 00:33:07.317 00:33:07.317 ' 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:07.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.317 --rc genhtml_branch_coverage=1 00:33:07.317 --rc genhtml_function_coverage=1 00:33:07.317 --rc genhtml_legend=1 00:33:07.317 --rc geninfo_all_blocks=1 00:33:07.317 --rc geninfo_unexecuted_blocks=1 00:33:07.317 00:33:07.317 ' 00:33:07.317 13:28:00 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:07.317 13:28:00 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:33:07.317 13:28:00 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:33:07.317 13:28:00 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67928 00:33:07.317 13:28:00 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:33:07.317 13:28:00 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:33:07.317 13:28:00 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67928 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67928 ']' 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:07.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:07.317 13:28:00 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:07.576 [2024-12-06 13:28:00.508965] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:33:07.577 [2024-12-06 13:28:00.509168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67928 ] 00:33:07.859 [2024-12-06 13:28:00.719674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:07.859 [2024-12-06 13:28:00.919798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.859 [2024-12-06 13:28:00.919836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.242 13:28:02 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.242 13:28:02 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:09.242 13:28:02 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:33:09.500 Nvme0n1 00:33:09.500 13:28:02 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:33:09.500 13:28:02 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:33:09.757 request: 00:33:09.757 { 00:33:09.757 "bdev_name": "Nvme0n1", 00:33:09.757 "filename": "non_existing_file", 00:33:09.757 "method": "bdev_nvme_apply_firmware", 00:33:09.757 "req_id": 1 00:33:09.757 } 00:33:09.757 Got JSON-RPC error response 00:33:09.757 response: 00:33:09.757 { 00:33:09.757 "code": -32603, 00:33:09.757 "message": "open file failed." 00:33:09.757 } 00:33:09.757 13:28:02 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:33:09.757 13:28:02 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:33:09.757 13:28:02 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:10.015 13:28:03 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:33:10.015 13:28:03 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67928 00:33:10.015 13:28:03 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67928 ']' 00:33:10.015 13:28:03 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67928 00:33:10.015 13:28:03 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:33:10.015 13:28:03 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:10.015 13:28:03 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67928 00:33:10.015 killing process with pid 67928 00:33:10.015 13:28:03 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:10.015 13:28:03 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:10.015 13:28:03 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67928' 00:33:10.015 13:28:03 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67928 00:33:10.015 13:28:03 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67928 00:33:13.415 ************************************ 00:33:13.415 END TEST nvme_rpc 00:33:13.415 ************************************ 00:33:13.415 00:33:13.415 real 0m5.709s 00:33:13.415 user 0m10.675s 00:33:13.415 sys 0m1.035s 00:33:13.415 13:28:05 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:13.415 13:28:05 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:13.415 13:28:05 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:33:13.415 13:28:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:33:13.415 13:28:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:13.415 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:33:13.415 ************************************ 00:33:13.415 START TEST nvme_rpc_timeouts 00:33:13.415 ************************************ 00:33:13.415 13:28:05 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:33:13.415 * Looking for test storage... 00:33:13.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:33:13.415 13:28:05 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:13.415 13:28:05 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:33:13.415 13:28:05 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:13.415 13:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:33:13.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:13.415 13:28:06 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:33:13.415 13:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:13.415 13:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:13.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.415 --rc genhtml_branch_coverage=1 00:33:13.415 --rc genhtml_function_coverage=1 00:33:13.415 --rc genhtml_legend=1 00:33:13.415 --rc geninfo_all_blocks=1 00:33:13.415 --rc geninfo_unexecuted_blocks=1 00:33:13.415 00:33:13.415 ' 00:33:13.415 13:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:13.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.415 --rc genhtml_branch_coverage=1 00:33:13.415 --rc genhtml_function_coverage=1 00:33:13.415 --rc genhtml_legend=1 00:33:13.415 --rc geninfo_all_blocks=1 00:33:13.415 --rc geninfo_unexecuted_blocks=1 00:33:13.415 00:33:13.415 ' 00:33:13.415 13:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:13.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.415 --rc genhtml_branch_coverage=1 00:33:13.415 --rc genhtml_function_coverage=1 00:33:13.415 --rc genhtml_legend=1 00:33:13.415 --rc geninfo_all_blocks=1 00:33:13.415 --rc geninfo_unexecuted_blocks=1 00:33:13.415 00:33:13.415 ' 00:33:13.415 13:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:13.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.415 --rc genhtml_branch_coverage=1 00:33:13.415 --rc genhtml_function_coverage=1 00:33:13.415 --rc genhtml_legend=1 00:33:13.415 --rc geninfo_all_blocks=1 00:33:13.415 --rc geninfo_unexecuted_blocks=1 00:33:13.415 00:33:13.415 ' 00:33:13.415 13:28:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:13.415 13:28:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_68015 00:33:13.415 13:28:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_68015 00:33:13.415 13:28:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=68053 00:33:13.415 13:28:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:33:13.415 13:28:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 68053 00:33:13.415 13:28:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:33:13.415 13:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 68053 ']' 00:33:13.415 13:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.415 13:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:13.415 13:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:13.415 13:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:13.415 13:28:06 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:33:13.415 [2024-12-06 13:28:06.180900] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:33:13.415 [2024-12-06 13:28:06.181422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68053 ] 00:33:13.415 [2024-12-06 13:28:06.382750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:13.674 [2024-12-06 13:28:06.538043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.674 [2024-12-06 13:28:06.538080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.611 13:28:07 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.611 13:28:07 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:33:14.611 13:28:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:33:14.611 Checking default timeout settings: 00:33:14.611 13:28:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:33:15.178 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:33:15.178 Making settings changes with rpc: 00:33:15.178 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:33:15.437 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:33:15.437 Check default vs. modified settings: 00:33:15.437 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:33:15.696 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:33:15.696 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_68015 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_68015 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:33:15.956 Setting action_on_timeout is changed as expected. 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
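The action_on_timeout comparison that just completed is repeated verbatim below for timeout_us and timeout_admin_us. Condensed into a single helper, the idiom looks like the following sketch; check_setting is an illustrative name rather than a function from nvme_rpc_timeouts.sh, and the expected-value argument is an addition here, but the grep | awk | sed pipeline and the /tmp settings files are exactly the ones used above.

    check_setting() {
        local name=$1 expected=$2 before modified
        # Pull the value column for this setting out of each saved config dump and
        # strip punctuation, as the grep | awk | sed pipeline above does.
        before=$(grep "$name" /tmp/settings_default_68015 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        modified=$(grep "$name" /tmp/settings_modified_68015 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [[ $before == "$modified" || $modified != "$expected" ]]; then
            echo "Setting $name was not changed as expected." >&2
            return 1
        fi
        echo "Setting $name is changed as expected."
    }
    check_setting action_on_timeout abort   # none -> abort, as verified above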
00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_68015 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_68015 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:33:15.956 Setting timeout_us is changed as expected. 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_68015 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_68015 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:33:15.956 Setting timeout_admin_us is changed as expected. 00:33:15.956 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:33:15.957 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:33:15.957 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
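All three settings were verified by textual comparison of two save_config dumps. The same check can be made structurally with jq against the live target; the subsystems/config layout used below is the usual shape of save_config output and is an assumption here, not something printed in this log.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Confirm the three values applied earlier with bdev_nvme_set_options; jq -e
    # makes the exit status reflect the boolean result.
    $rpc save_config | jq -e '
        .subsystems[] | select(.subsystem == "bdev").config[]
        | select(.method == "bdev_nvme_set_options").params
        | .action_on_timeout == "abort"
          and .timeout_us == 12000000
          and .timeout_admin_us == 24000000'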
00:33:15.957 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:33:15.957 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_68015 /tmp/settings_modified_68015 00:33:15.957 13:28:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 68053 00:33:15.957 13:28:08 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 68053 ']' 00:33:15.957 13:28:08 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 68053 00:33:15.957 13:28:08 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:33:15.957 13:28:08 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:15.957 13:28:08 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68053 00:33:15.957 killing process with pid 68053 00:33:15.957 13:28:08 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:15.957 13:28:08 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:15.957 13:28:08 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68053' 00:33:15.957 13:28:08 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 68053 00:33:15.957 13:28:08 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 68053 00:33:19.250 RPC TIMEOUT SETTING TEST PASSED. 00:33:19.250 13:28:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:33:19.250 00:33:19.250 real 0m5.957s 00:33:19.250 user 0m11.268s 00:33:19.250 sys 0m1.018s 00:33:19.250 ************************************ 00:33:19.250 END TEST nvme_rpc_timeouts 00:33:19.250 ************************************ 00:33:19.250 13:28:11 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:19.250 13:28:11 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:33:19.250 13:28:11 -- spdk/autotest.sh@239 -- # uname -s 00:33:19.250 13:28:11 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:33:19.250 13:28:11 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:33:19.250 13:28:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:19.250 13:28:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:19.250 13:28:11 -- common/autotest_common.sh@10 -- # set +x 00:33:19.250 ************************************ 00:33:19.250 START TEST sw_hotplug 00:33:19.250 ************************************ 00:33:19.250 13:28:11 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:33:19.250 * Looking for test storage... 
00:33:19.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:33:19.250 13:28:11 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:19.250 13:28:11 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:19.250 13:28:11 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:33:19.250 13:28:12 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:19.250 13:28:12 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:33:19.250 13:28:12 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:19.250 13:28:12 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:19.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.251 --rc genhtml_branch_coverage=1 00:33:19.251 --rc genhtml_function_coverage=1 00:33:19.251 --rc genhtml_legend=1 00:33:19.251 --rc geninfo_all_blocks=1 00:33:19.251 --rc geninfo_unexecuted_blocks=1 00:33:19.251 00:33:19.251 ' 00:33:19.251 13:28:12 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:19.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.251 --rc genhtml_branch_coverage=1 00:33:19.251 --rc genhtml_function_coverage=1 00:33:19.251 --rc genhtml_legend=1 00:33:19.251 --rc geninfo_all_blocks=1 00:33:19.251 --rc geninfo_unexecuted_blocks=1 00:33:19.251 00:33:19.251 ' 00:33:19.251 13:28:12 
sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:19.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.251 --rc genhtml_branch_coverage=1 00:33:19.251 --rc genhtml_function_coverage=1 00:33:19.251 --rc genhtml_legend=1 00:33:19.251 --rc geninfo_all_blocks=1 00:33:19.251 --rc geninfo_unexecuted_blocks=1 00:33:19.251 00:33:19.251 ' 00:33:19.251 13:28:12 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:19.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.251 --rc genhtml_branch_coverage=1 00:33:19.251 --rc genhtml_function_coverage=1 00:33:19.251 --rc genhtml_legend=1 00:33:19.251 --rc geninfo_all_blocks=1 00:33:19.251 --rc geninfo_unexecuted_blocks=1 00:33:19.251 00:33:19.251 ' 00:33:19.251 13:28:12 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:19.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:19.510 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:33:19.510 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:33:19.510 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:33:19.510 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:33:19.769 13:28:12 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:33:19.769 13:28:12 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:33:19.769 13:28:12 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:33:19.769 13:28:12 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@233 -- # local class 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:33:19.769 
13:28:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:33:19.769 13:28:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:33:19.770 13:28:12 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:33:19.770 13:28:12 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:33:19.770 13:28:12 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:33:19.770 13:28:12 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:20.028 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:20.286 Waiting for block devices as requested 00:33:20.546 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:20.546 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:20.546 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:20.806 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:26.177 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:26.177 13:28:18 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:33:26.177 13:28:18 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:26.465 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:33:26.465 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:26.465 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:33:26.724 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:33:26.982 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:33:26.982 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:33:27.239 13:28:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68939 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:33:27.239 13:28:20 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:33:27.239 13:28:20 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:33:27.239 13:28:20 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:33:27.239 13:28:20 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:33:27.239 13:28:20 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:33:27.239 13:28:20 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:33:27.496 Initializing NVMe Controllers 00:33:27.496 Attaching to 0000:00:10.0 00:33:27.496 Attaching to 0000:00:11.0 00:33:27.496 Attached to 0000:00:10.0 00:33:27.496 Attached to 0000:00:11.0 00:33:27.496 Initialization complete. Starting I/O... 00:33:27.496 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:33:27.496 QEMU NVMe Ctrl (12341 ): 2 I/Os completed (+2) 00:33:27.496 00:33:28.429 QEMU NVMe Ctrl (12340 ): 1168 I/Os completed (+1168) 00:33:28.429 QEMU NVMe Ctrl (12341 ): 1173 I/Os completed (+1171) 00:33:28.429 00:33:29.817 QEMU NVMe Ctrl (12340 ): 2672 I/Os completed (+1504) 00:33:29.817 QEMU NVMe Ctrl (12341 ): 2681 I/Os completed (+1508) 00:33:29.817 00:33:30.776 QEMU NVMe Ctrl (12340 ): 4468 I/Os completed (+1796) 00:33:30.776 QEMU NVMe Ctrl (12341 ): 4500 I/Os completed (+1819) 00:33:30.776 00:33:31.713 QEMU NVMe Ctrl (12340 ): 6228 I/Os completed (+1760) 00:33:31.713 QEMU NVMe Ctrl (12341 ): 6267 I/Os completed (+1767) 00:33:31.713 00:33:32.648 QEMU NVMe Ctrl (12340 ): 7992 I/Os completed (+1764) 00:33:32.648 QEMU NVMe Ctrl (12341 ): 8034 I/Os completed (+1767) 00:33:32.648 00:33:33.215 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:33.215 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:33.215 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:33.215 [2024-12-06 13:28:26.255354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:33.215 Controller removed: QEMU NVMe Ctrl (12340 ) 00:33:33.215 [2024-12-06 13:28:26.258416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 [2024-12-06 13:28:26.258618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 [2024-12-06 13:28:26.258655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 [2024-12-06 13:28:26.258684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:33.215 [2024-12-06 13:28:26.262501] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 [2024-12-06 13:28:26.262568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 [2024-12-06 13:28:26.262591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 [2024-12-06 13:28:26.262615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:33.215 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:33.215 [2024-12-06 13:28:26.286604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:33.215 Controller removed: QEMU NVMe Ctrl (12341 ) 00:33:33.215 [2024-12-06 13:28:26.291848] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 [2024-12-06 13:28:26.292027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 [2024-12-06 13:28:26.292152] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 [2024-12-06 13:28:26.292187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:33.215 [2024-12-06 13:28:26.295487] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 [2024-12-06 13:28:26.295540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 [2024-12-06 13:28:26.295568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 [2024-12-06 13:28:26.295589] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:33.215 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:33:33.215 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:33.473 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:33.473 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:33.473 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:33.473 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:33.473 00:33:33.473 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:33.473 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:33.473 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:33.473 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:33.473 Attaching to 0000:00:10.0 00:33:33.473 Attached to 0000:00:10.0 00:33:33.731 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:33.731 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:33.731 13:28:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:33.731 Attaching to 0000:00:11.0 00:33:33.731 Attached to 0000:00:11.0 00:33:34.665 QEMU NVMe Ctrl (12340 ): 1580 I/Os completed (+1580) 00:33:34.665 QEMU NVMe Ctrl (12341 ): 1443 I/Os completed (+1443) 00:33:34.665 00:33:35.601 QEMU NVMe Ctrl (12340 ): 3324 I/Os completed (+1744) 00:33:35.601 QEMU NVMe Ctrl (12341 ): 3187 I/Os completed (+1744) 00:33:35.601 00:33:36.537 QEMU NVMe Ctrl (12340 ): 5044 I/Os completed (+1720) 00:33:36.537 QEMU NVMe Ctrl (12341 ): 4926 I/Os completed (+1739) 00:33:36.537 00:33:37.469 QEMU NVMe Ctrl (12340 ): 6819 I/Os completed (+1775) 00:33:37.469 QEMU NVMe Ctrl (12341 ): 6709 I/Os completed (+1783) 00:33:37.469 00:33:38.842 QEMU NVMe Ctrl (12340 ): 8579 I/Os completed (+1760) 00:33:38.842 QEMU NVMe Ctrl (12341 ): 8474 I/Os completed (+1765) 00:33:38.842 00:33:39.780 QEMU NVMe Ctrl (12340 ): 10351 I/Os completed (+1772) 00:33:39.780 QEMU NVMe Ctrl (12341 ): 10256 I/Os completed (+1782) 00:33:39.780 00:33:40.718 QEMU NVMe Ctrl (12340 ): 12083 I/Os completed (+1732) 00:33:40.718 QEMU NVMe Ctrl (12341 ): 11988 I/Os completed (+1732) 00:33:40.718 00:33:41.655 QEMU NVMe Ctrl (12340 ): 13639 I/Os completed (+1556) 00:33:41.655 QEMU NVMe Ctrl (12341 ): 13553 I/Os completed (+1565) 00:33:41.655 
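Each pass above is one surprise-removal cycle driven by sw_hotplug.sh while the hotplug example app keeps I/O running: the bare echo 1 / echo uio_pci_generic / echo <bdf> xtrace lines are writes into sysfs. A sketch of one cycle for a single controller follows; the sysfs paths are inferred from the values being echoed and from the rescan seen in the script's trap, so treat them (in particular the drivers_probe rebind) as assumptions rather than lines from this log.

    bdf=0000:00:10.0   # likewise for 0000:00:11.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"            # surprise-remove the device
    # ...the app logs "Controller removed" and aborts outstanding commands...
    echo 1 > /sys/bus/pci/rescan                           # re-enumerate the bus
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe               # rebind to the overridden driver
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"  # clear the override
    # ...the app logs "Attaching to" / "Attached to" and the I/O counters resume...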
00:33:42.620 QEMU NVMe Ctrl (12340 ): 15399 I/Os completed (+1760) 00:33:42.620 QEMU NVMe Ctrl (12341 ): 15313 I/Os completed (+1760) 00:33:42.620 00:33:43.567 QEMU NVMe Ctrl (12340 ): 17039 I/Os completed (+1640) 00:33:43.567 QEMU NVMe Ctrl (12341 ): 16956 I/Os completed (+1643) 00:33:43.567 00:33:44.504 QEMU NVMe Ctrl (12340 ): 18859 I/Os completed (+1820) 00:33:44.504 QEMU NVMe Ctrl (12341 ): 18779 I/Os completed (+1823) 00:33:44.504 00:33:45.439 QEMU NVMe Ctrl (12340 ): 20667 I/Os completed (+1808) 00:33:45.439 QEMU NVMe Ctrl (12341 ): 20587 I/Os completed (+1808) 00:33:45.439 00:33:45.697 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:45.697 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:45.697 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:45.697 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:45.697 [2024-12-06 13:28:38.630290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:45.697 Controller removed: QEMU NVMe Ctrl (12340 ) 00:33:45.697 [2024-12-06 13:28:38.632536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 [2024-12-06 13:28:38.632607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 [2024-12-06 13:28:38.632634] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 [2024-12-06 13:28:38.632666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:45.697 [2024-12-06 13:28:38.636139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 [2024-12-06 13:28:38.636198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 [2024-12-06 13:28:38.636221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 [2024-12-06 13:28:38.636246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:33:45.697 EAL: Scan for (pci) bus failed. 00:33:45.697 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:45.697 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:45.697 [2024-12-06 13:28:38.667565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:45.697 Controller removed: QEMU NVMe Ctrl (12341 ) 00:33:45.697 [2024-12-06 13:28:38.669486] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 [2024-12-06 13:28:38.669539] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 [2024-12-06 13:28:38.669572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 [2024-12-06 13:28:38.669594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:45.697 [2024-12-06 13:28:38.672732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 [2024-12-06 13:28:38.672806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 [2024-12-06 13:28:38.672833] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 [2024-12-06 13:28:38.672857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:45.697 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:33:45.697 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:45.697 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:33:45.697 EAL: Scan for (pci) bus failed. 00:33:45.955 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:45.955 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:45.955 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:45.955 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:45.955 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:45.955 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:45.955 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:45.955 13:28:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:45.955 Attaching to 0000:00:10.0 00:33:45.955 Attached to 0000:00:10.0 00:33:45.955 13:28:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:45.955 13:28:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:45.955 13:28:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:45.955 Attaching to 0000:00:11.0 00:33:45.955 Attached to 0000:00:11.0 00:33:46.521 QEMU NVMe Ctrl (12340 ): 1032 I/Os completed (+1032) 00:33:46.521 QEMU NVMe Ctrl (12341 ): 857 I/Os completed (+857) 00:33:46.521 00:33:47.452 QEMU NVMe Ctrl (12340 ): 2746 I/Os completed (+1714) 00:33:47.452 QEMU NVMe Ctrl (12341 ): 2592 I/Os completed (+1735) 00:33:47.452 00:33:48.826 QEMU NVMe Ctrl (12340 ): 4606 I/Os completed (+1860) 00:33:48.826 QEMU NVMe Ctrl (12341 ): 4452 I/Os completed (+1860) 00:33:48.826 00:33:49.759 QEMU NVMe Ctrl (12340 ): 6378 I/Os completed (+1772) 00:33:49.759 QEMU NVMe Ctrl (12341 ): 6225 I/Os completed (+1773) 00:33:49.759 00:33:50.697 QEMU NVMe Ctrl (12340 ): 7894 I/Os completed (+1516) 00:33:50.697 QEMU NVMe Ctrl (12341 ): 7763 I/Os completed (+1538) 00:33:50.697 00:33:51.634 QEMU NVMe Ctrl (12340 ): 9638 I/Os completed (+1744) 00:33:51.634 QEMU NVMe Ctrl (12341 ): 9517 I/Os completed (+1754) 00:33:51.634 00:33:52.576 QEMU NVMe Ctrl (12340 ): 11295 I/Os completed (+1657) 00:33:52.576 QEMU NVMe Ctrl (12341 ): 11195 I/Os completed (+1678) 00:33:52.576 00:33:53.513 
QEMU NVMe Ctrl (12340 ): 12923 I/Os completed (+1628) 00:33:53.513 QEMU NVMe Ctrl (12341 ): 12823 I/Os completed (+1628) 00:33:53.513 00:33:54.452 QEMU NVMe Ctrl (12340 ): 14455 I/Os completed (+1532) 00:33:54.452 QEMU NVMe Ctrl (12341 ): 14381 I/Os completed (+1558) 00:33:54.452 00:33:55.830 QEMU NVMe Ctrl (12340 ): 16143 I/Os completed (+1688) 00:33:55.830 QEMU NVMe Ctrl (12341 ): 16071 I/Os completed (+1690) 00:33:55.830 00:33:56.767 QEMU NVMe Ctrl (12340 ): 17895 I/Os completed (+1752) 00:33:56.767 QEMU NVMe Ctrl (12341 ): 17834 I/Os completed (+1763) 00:33:56.767 00:33:57.703 QEMU NVMe Ctrl (12340 ): 19767 I/Os completed (+1872) 00:33:57.703 QEMU NVMe Ctrl (12341 ): 19706 I/Os completed (+1872) 00:33:57.703 00:33:57.961 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:57.961 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:57.961 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:57.961 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:57.961 [2024-12-06 13:28:51.041218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:57.961 Controller removed: QEMU NVMe Ctrl (12340 ) 00:33:57.961 [2024-12-06 13:28:51.043269] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:57.961 [2024-12-06 13:28:51.043354] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:57.961 [2024-12-06 13:28:51.043381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:57.961 [2024-12-06 13:28:51.043407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:57.961 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:57.962 [2024-12-06 13:28:51.046770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:57.962 [2024-12-06 13:28:51.046828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:57.962 [2024-12-06 13:28:51.046850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:57.962 [2024-12-06 13:28:51.046873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:58.220 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:58.220 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:58.220 [2024-12-06 13:28:51.082163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:58.220 Controller removed: QEMU NVMe Ctrl (12341 ) 00:33:58.220 [2024-12-06 13:28:51.084489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:58.220 [2024-12-06 13:28:51.084561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:58.220 [2024-12-06 13:28:51.084592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:58.220 [2024-12-06 13:28:51.084617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:58.220 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:58.220 [2024-12-06 13:28:51.088212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:58.220 [2024-12-06 13:28:51.088268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:58.220 [2024-12-06 13:28:51.088301] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:58.220 [2024-12-06 13:28:51.088324] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:58.220 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:33:58.220 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:58.220 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:58.220 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:58.220 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:58.478 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:58.478 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:58.478 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:58.478 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:58.478 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:58.478 Attaching to 0000:00:10.0 00:33:58.478 Attached to 0000:00:10.0 00:33:58.478 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:58.478 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:58.478 13:28:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:58.478 Attaching to 0000:00:11.0 00:33:58.478 Attached to 0000:00:11.0 00:33:58.478 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:58.478 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:58.478 [2024-12-06 13:28:51.500854] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:34:10.678 13:29:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:34:10.678 13:29:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:10.678 13:29:03 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.24 00:34:10.678 13:29:03 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.24 00:34:10.678 13:29:03 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:34:10.678 13:29:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.24 00:34:10.678 13:29:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.24 2 00:34:10.678 remove_attach_helper took 43.24s to complete (handling 2 nvme drive(s)) 13:29:03 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:34:17.235 13:29:09 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68939 00:34:17.235 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68939) - No such process 00:34:17.235 13:29:09 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68939 00:34:17.235 13:29:09 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:34:17.235 13:29:09 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:34:17.235 13:29:09 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:34:17.235 13:29:09 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69474 00:34:17.235 13:29:09 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:34:17.235 13:29:09 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:17.235 13:29:09 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69474 00:34:17.235 13:29:09 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69474 ']' 00:34:17.235 13:29:09 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.235 13:29:09 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.235 13:29:09 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.235 13:29:09 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.235 13:29:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:17.235 [2024-12-06 13:29:09.653479] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:34:17.235 [2024-12-06 13:29:09.653703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69474 ] 00:34:17.235 [2024-12-06 13:29:09.842490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.235 [2024-12-06 13:29:10.002353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.169 13:29:11 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:18.169 13:29:11 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:34:18.169 13:29:11 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:34:18.169 13:29:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.169 13:29:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:18.169 13:29:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.169 13:29:11 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:34:18.169 13:29:11 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:34:18.169 13:29:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:34:18.169 13:29:11 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:34:18.170 13:29:11 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:34:18.170 13:29:11 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:34:18.170 13:29:11 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:34:18.170 13:29:11 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:34:18.170 13:29:11 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:34:18.170 13:29:11 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:34:18.170 13:29:11 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:34:18.170 13:29:11 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:34:18.170 13:29:11 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:34:24.737 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:24.737 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:24.737 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:24.737 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:24.737 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:24.737 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:24.737 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:24.737 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:24.737 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:24.737 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:24.737 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:24.737 13:29:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.737 13:29:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:24.737 [2024-12-06 13:29:17.247113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:34:24.737 [2024-12-06 13:29:17.250084] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:24.737 [2024-12-06 13:29:17.250136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.737 [2024-12-06 13:29:17.250160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.737 [2024-12-06 13:29:17.250193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:24.737 [2024-12-06 13:29:17.250205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.737 [2024-12-06 13:29:17.250222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.737 [2024-12-06 13:29:17.250236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:24.737 [2024-12-06 13:29:17.250250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.737 [2024-12-06 13:29:17.250262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.737 [2024-12-06 13:29:17.250284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:24.737 [2024-12-06 13:29:17.250296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.737 [2024-12-06 13:29:17.250311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.737 13:29:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.737 13:29:17 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:34:24.737 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:24.737 [2024-12-06 13:29:17.647196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:34:24.737 [2024-12-06 13:29:17.650239] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:24.737 [2024-12-06 13:29:17.650287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.737 [2024-12-06 13:29:17.650311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.737 [2024-12-06 13:29:17.650345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:24.737 [2024-12-06 13:29:17.650360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.737 [2024-12-06 13:29:17.650374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.737 [2024-12-06 13:29:17.650392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:24.737 [2024-12-06 13:29:17.650417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.737 [2024-12-06 13:29:17.650433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.738 [2024-12-06 13:29:17.650446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:24.738 [2024-12-06 13:29:17.650461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.738 [2024-12-06 13:29:17.650473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.738 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:34:24.738 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:24.738 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:24.738 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:24.738 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:24.738 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:24.738 13:29:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.738 13:29:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:24.738 13:29:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.738 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:24.738 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:24.996 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:24.996 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:24.996 13:29:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:24.996 13:29:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:24.996 13:29:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:24.996 
13:29:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:24.996 13:29:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:24.996 13:29:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:25.254 13:29:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:25.254 13:29:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:25.254 13:29:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:37.466 13:29:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.466 13:29:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:37.466 13:29:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:37.466 [2024-12-06 13:29:30.247430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:34:37.466 [2024-12-06 13:29:30.251149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:37.466 [2024-12-06 13:29:30.251209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:37.466 [2024-12-06 13:29:30.251231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.466 [2024-12-06 13:29:30.251265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:37.466 [2024-12-06 13:29:30.251279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:37.466 [2024-12-06 13:29:30.251297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.466 [2024-12-06 13:29:30.251313] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:37.466 [2024-12-06 13:29:30.251330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:37.466 [2024-12-06 13:29:30.251354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.466 [2024-12-06 13:29:30.251375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:37.466 [2024-12-06 13:29:30.251389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:37.466 [2024-12-06 13:29:30.251428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:37.466 13:29:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.466 13:29:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:37.466 13:29:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:37.466 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:37.724 [2024-12-06 13:29:30.747457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:34:37.725 [2024-12-06 13:29:30.750595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:37.725 [2024-12-06 13:29:30.750642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:37.725 [2024-12-06 13:29:30.750668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.725 [2024-12-06 13:29:30.750697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:37.725 [2024-12-06 13:29:30.750714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:37.725 [2024-12-06 13:29:30.750726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.725 [2024-12-06 13:29:30.750744] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:37.725 [2024-12-06 13:29:30.750757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:37.725 [2024-12-06 13:29:30.750773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.725 [2024-12-06 13:29:30.750786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:37.725 [2024-12-06 13:29:30.750801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:37.725 [2024-12-06 13:29:30.750813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:37.983 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:37.983 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:37.983 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:37.983 13:29:30 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:34:37.983 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:37.983 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:37.983 13:29:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.983 13:29:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:37.983 13:29:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.983 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:37.983 13:29:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:37.983 13:29:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:37.983 13:29:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:37.983 13:29:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:38.242 13:29:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:38.242 13:29:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:38.242 13:29:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:38.242 13:29:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:38.242 13:29:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:38.242 13:29:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:38.242 13:29:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:38.242 13:29:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:50.446 13:29:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.446 13:29:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:50.446 13:29:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:50.446 [2024-12-06 13:29:43.347771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:34:50.446 [2024-12-06 13:29:43.351321] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:50.446 [2024-12-06 13:29:43.351381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:50.446 [2024-12-06 13:29:43.351432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.446 [2024-12-06 13:29:43.351467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:50.446 [2024-12-06 13:29:43.351480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:50.446 [2024-12-06 13:29:43.351501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.446 [2024-12-06 13:29:43.351516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:50.446 [2024-12-06 13:29:43.351533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:50.446 [2024-12-06 13:29:43.351546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.446 [2024-12-06 13:29:43.351564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:50.446 [2024-12-06 13:29:43.351577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:50.446 [2024-12-06 13:29:43.351593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:50.446 13:29:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.446 13:29:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:50.446 13:29:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:50.446 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:51.011 [2024-12-06 13:29:43.847788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:34:51.011 [2024-12-06 13:29:43.850953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.011 [2024-12-06 13:29:43.851018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.011 [2024-12-06 13:29:43.851041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.011 [2024-12-06 13:29:43.851070] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.011 [2024-12-06 13:29:43.851086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.011 [2024-12-06 13:29:43.851099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.011 [2024-12-06 13:29:43.851117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.011 [2024-12-06 13:29:43.851129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.011 [2024-12-06 13:29:43.851148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.011 [2024-12-06 13:29:43.851162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:51.011 [2024-12-06 13:29:43.851177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.011 [2024-12-06 13:29:43.851190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.011 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:51.011 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:51.011 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:51.011 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:51.011 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:51.011 13:29:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.011 13:29:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:51.011 13:29:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:51.011 13:29:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.011 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:51.011 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:51.269 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:51.269 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:51.269 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:51.269 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:51.269 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:51.269 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:51.269 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:51.269 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:34:51.269 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:51.269 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:51.269 13:29:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.25 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.25 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.25 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.25 2 00:35:03.465 remove_attach_helper took 45.25s to complete (handling 2 nvme drive(s)) 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:35:03.465 13:29:56 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:35:03.465 13:29:56 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:35:03.465 13:29:56 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:10.030 13:30:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:10.030 13:30:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:10.030 [2024-12-06 13:30:02.535457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:35:10.030 [2024-12-06 13:30:02.537325] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:10.030 [2024-12-06 13:30:02.537378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.030 [2024-12-06 13:30:02.537411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.030 [2024-12-06 13:30:02.537440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:10.030 [2024-12-06 13:30:02.537453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.030 [2024-12-06 13:30:02.537469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.030 [2024-12-06 13:30:02.537482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:10.030 [2024-12-06 13:30:02.537497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.030 [2024-12-06 13:30:02.537509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.030 [2024-12-06 13:30:02.537525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:10.030 [2024-12-06 13:30:02.537536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.030 [2024-12-06 13:30:02.537554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.030 13:30:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:35:10.030 13:30:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:35:10.030 [2024-12-06 13:30:02.935489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:35:10.030 [2024-12-06 13:30:02.938245] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:10.030 [2024-12-06 13:30:02.938288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.030 [2024-12-06 13:30:02.938308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.030 [2024-12-06 13:30:02.938332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:10.030 [2024-12-06 13:30:02.938348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.030 [2024-12-06 13:30:02.938361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.030 [2024-12-06 13:30:02.938377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:10.030 [2024-12-06 13:30:02.938389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.030 [2024-12-06 13:30:02.938420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.030 [2024-12-06 13:30:02.938434] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:10.030 [2024-12-06 13:30:02.938448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.030 [2024-12-06 13:30:02.938461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.030 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:35:10.030 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:10.030 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:10.030 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:10.030 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:10.030 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:10.030 13:30:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.030 13:30:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:10.030 13:30:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.289 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:35:10.289 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:35:10.289 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:10.289 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:10.289 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:35:10.289 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:35:10.549 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:10.549 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:10.549 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:10.549 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:35:10.549 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:35:10.549 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:10.549 13:30:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:22.767 13:30:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.767 13:30:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:22.767 13:30:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:22.767 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:22.767 13:30:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.767 13:30:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:22.767 [2024-12-06 13:30:15.635806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:35:22.767 [2024-12-06 13:30:15.638247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:22.767 [2024-12-06 13:30:15.638445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:22.767 [2024-12-06 13:30:15.638579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.767 [2024-12-06 13:30:15.638766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:22.768 [2024-12-06 13:30:15.638877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:22.768 [2024-12-06 13:30:15.639048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.768 [2024-12-06 13:30:15.639174] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:22.768 [2024-12-06 13:30:15.639323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:22.768 [2024-12-06 13:30:15.639461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.768 [2024-12-06 13:30:15.639619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:22.768 [2024-12-06 13:30:15.639724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:22.768 [2024-12-06 13:30:15.639805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:22.768 13:30:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.768 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:35:22.768 13:30:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:35:23.027 [2024-12-06 13:30:16.035822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:35:23.027 [2024-12-06 13:30:16.038985] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:23.027 [2024-12-06 13:30:16.039160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.027 [2024-12-06 13:30:16.039357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.027 [2024-12-06 13:30:16.039564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:23.027 [2024-12-06 13:30:16.039616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.027 [2024-12-06 13:30:16.039725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.027 [2024-12-06 13:30:16.039792] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:23.027 [2024-12-06 13:30:16.039874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.027 [2024-12-06 13:30:16.039987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.027 [2024-12-06 13:30:16.040093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:23.027 [2024-12-06 13:30:16.040136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.027 [2024-12-06 13:30:16.040368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.285 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:35:23.285 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:23.285 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:23.285 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:23.285 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:23.285 13:30:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.285 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:23.285 13:30:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:23.285 13:30:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.285 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:35:23.285 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:35:23.285 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:23.285 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:23.285 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:35:23.544 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:35:23.544 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:23.544 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:23.544 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:23.544 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:35:23.544 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:35:23.544 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:23.544 13:30:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:35.761 13:30:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.761 13:30:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:35.761 13:30:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:35.761 13:30:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.761 13:30:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:35.761 [2024-12-06 13:30:28.736067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:35:35.761 [2024-12-06 13:30:28.738355] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:35.761 [2024-12-06 13:30:28.738423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:35.761 [2024-12-06 13:30:28.738446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.761 [2024-12-06 13:30:28.738478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:35.761 [2024-12-06 13:30:28.738492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:35.761 [2024-12-06 13:30:28.738514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.761 [2024-12-06 13:30:28.738530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:35.761 [2024-12-06 13:30:28.738551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:35.761 [2024-12-06 13:30:28.738565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.761 [2024-12-06 13:30:28.738583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:35.761 [2024-12-06 13:30:28.738596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:35.761 [2024-12-06 13:30:28.738613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.761 13:30:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:35:35.761 13:30:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:35:36.329 [2024-12-06 13:30:29.136066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:35:36.329 [2024-12-06 13:30:29.138283] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:36.329 [2024-12-06 13:30:29.138325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.329 [2024-12-06 13:30:29.138349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.329 [2024-12-06 13:30:29.138376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:36.329 [2024-12-06 13:30:29.138391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.329 [2024-12-06 13:30:29.138419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.329 [2024-12-06 13:30:29.138439] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:36.329 [2024-12-06 13:30:29.138450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.329 [2024-12-06 13:30:29.138467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.330 [2024-12-06 13:30:29.138480] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:35:36.330 [2024-12-06 13:30:29.138500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.330 [2024-12-06 13:30:29.138512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.330 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:35:36.330 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:35:36.330 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:35:36.330 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:36.330 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:36.330 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:36.330 13:30:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.330 13:30:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:36.330 13:30:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.330 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:35:36.330 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:35:36.589 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:36.589 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:36.589 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:35:36.589 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:35:36.589 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:36.589 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:35:36.589 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:35:36.589 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:35:36.589 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:35:36.589 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:35:36.589 13:30:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:48.863 13:30:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:35:48.863 13:30:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:35:48.863 13:30:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:35:48.863 13:30:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:48.863 13:30:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:48.863 13:30:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.863 13:30:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:35:48.863 13:30:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.27 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.27 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:35:48.863 13:30:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.27 00:35:48.863 13:30:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.27 2 00:35:48.863 remove_attach_helper took 45.27s to complete (handling 2 nvme drive(s)) 13:30:41 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:35:48.863 13:30:41 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69474 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69474 ']' 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69474 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69474 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69474' 00:35:48.863 killing process with pid 69474 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69474 00:35:48.863 13:30:41 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69474 00:35:52.164 13:30:44 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:52.164 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:52.729 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:52.729 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:52.729 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:35:52.986 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:35:52.986 00:35:52.986 real 2m34.132s 00:35:52.986 user 1m52.732s 00:35:52.986 sys 0m21.925s 00:35:52.986 13:30:45 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.986 13:30:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:52.986 ************************************ 00:35:52.986 END TEST sw_hotplug 00:35:52.986 ************************************ 00:35:52.986 13:30:46 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:35:52.986 13:30:46 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:35:52.986 13:30:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:52.986 13:30:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.986 13:30:46 -- common/autotest_common.sh@10 -- # set +x 00:35:52.986 ************************************ 00:35:52.986 START TEST nvme_xnvme 00:35:52.986 ************************************ 00:35:52.986 13:30:46 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:35:53.244 * Looking for test storage... 00:35:53.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.245 13:30:46 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:53.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.245 --rc genhtml_branch_coverage=1 00:35:53.245 --rc genhtml_function_coverage=1 00:35:53.245 --rc genhtml_legend=1 00:35:53.245 --rc geninfo_all_blocks=1 00:35:53.245 --rc geninfo_unexecuted_blocks=1 00:35:53.245 00:35:53.245 ' 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:53.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.245 --rc genhtml_branch_coverage=1 00:35:53.245 --rc genhtml_function_coverage=1 00:35:53.245 --rc genhtml_legend=1 00:35:53.245 --rc geninfo_all_blocks=1 00:35:53.245 --rc geninfo_unexecuted_blocks=1 00:35:53.245 00:35:53.245 ' 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:53.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.245 --rc genhtml_branch_coverage=1 00:35:53.245 --rc genhtml_function_coverage=1 00:35:53.245 --rc genhtml_legend=1 00:35:53.245 --rc geninfo_all_blocks=1 00:35:53.245 --rc geninfo_unexecuted_blocks=1 00:35:53.245 00:35:53.245 ' 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:53.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.245 --rc genhtml_branch_coverage=1 00:35:53.245 --rc genhtml_function_coverage=1 00:35:53.245 --rc genhtml_legend=1 00:35:53.245 --rc geninfo_all_blocks=1 00:35:53.245 --rc geninfo_unexecuted_blocks=1 00:35:53.245 00:35:53.245 ' 00:35:53.245 13:30:46 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:35:53.245 13:30:46 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:35:53.245 13:30:46 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:35:53.245 13:30:46 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
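For readers following the trace: the lt/cmp_versions helpers above split the two version strings on '.', '-' and ':' and compare the components numerically, padding the shorter one with zeros. A minimal standalone sketch of that logic (simplified names, not the verbatim scripts/common.sh source):

  lt() {  # succeed iff version $1 sorts before version $2, as in "lt 1.15 2"
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          # a missing component counts as 0, so "2" compares as "2.0"
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1  # equal versions are not "less than"
  }

  lt 1.15 2 && echo old-lcov   # prints old-lcov, matching the branch taken above

Because lcov here reports 1.15, the comparison succeeds and the pre-2.0 options --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 end up in LCOV_OPTS, as the exports in the trace show.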
00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:35:53.245 13:30:46 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:35:53.246 13:30:46 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:35:53.246 13:30:46 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:35:53.246 13:30:46 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:35:53.246 #define SPDK_CONFIG_H 00:35:53.246 #define SPDK_CONFIG_AIO_FSDEV 1 00:35:53.246 #define SPDK_CONFIG_APPS 1 00:35:53.246 #define SPDK_CONFIG_ARCH native 00:35:53.246 #define SPDK_CONFIG_ASAN 1 00:35:53.246 #undef SPDK_CONFIG_AVAHI 00:35:53.246 #undef SPDK_CONFIG_CET 00:35:53.246 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:35:53.246 #define SPDK_CONFIG_COVERAGE 1 00:35:53.246 #define SPDK_CONFIG_CROSS_PREFIX 00:35:53.246 #undef SPDK_CONFIG_CRYPTO 00:35:53.246 #undef SPDK_CONFIG_CRYPTO_MLX5 00:35:53.246 #undef SPDK_CONFIG_CUSTOMOCF 00:35:53.246 #undef SPDK_CONFIG_DAOS 00:35:53.246 #define SPDK_CONFIG_DAOS_DIR 00:35:53.246 #define SPDK_CONFIG_DEBUG 1 00:35:53.246 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:35:53.246 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:35:53.246 #define SPDK_CONFIG_DPDK_INC_DIR 00:35:53.246 #define SPDK_CONFIG_DPDK_LIB_DIR 00:35:53.246 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:35:53.246 #undef SPDK_CONFIG_DPDK_UADK 00:35:53.246 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:35:53.246 #define SPDK_CONFIG_EXAMPLES 1 00:35:53.246 #undef SPDK_CONFIG_FC 00:35:53.246 #define SPDK_CONFIG_FC_PATH 00:35:53.246 #define SPDK_CONFIG_FIO_PLUGIN 1 00:35:53.246 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:35:53.246 #define SPDK_CONFIG_FSDEV 1 00:35:53.246 #undef SPDK_CONFIG_FUSE 00:35:53.246 #undef SPDK_CONFIG_FUZZER 00:35:53.246 #define SPDK_CONFIG_FUZZER_LIB 00:35:53.246 #undef SPDK_CONFIG_GOLANG 00:35:53.246 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:35:53.246 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:35:53.246 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:35:53.246 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:35:53.246 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:35:53.246 #undef SPDK_CONFIG_HAVE_LIBBSD 00:35:53.246 #undef SPDK_CONFIG_HAVE_LZ4 00:35:53.246 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:35:53.246 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:35:53.246 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:35:53.246 #define SPDK_CONFIG_IDXD 1 00:35:53.246 #define SPDK_CONFIG_IDXD_KERNEL 1 00:35:53.246 #undef SPDK_CONFIG_IPSEC_MB 00:35:53.246 #define SPDK_CONFIG_IPSEC_MB_DIR 00:35:53.246 #define SPDK_CONFIG_ISAL 1 00:35:53.246 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:35:53.246 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:35:53.246 #define SPDK_CONFIG_LIBDIR 00:35:53.246 #undef SPDK_CONFIG_LTO 00:35:53.246 #define SPDK_CONFIG_MAX_LCORES 128 00:35:53.246 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:35:53.246 #define SPDK_CONFIG_NVME_CUSE 1 00:35:53.246 #undef SPDK_CONFIG_OCF 00:35:53.246 #define SPDK_CONFIG_OCF_PATH 00:35:53.246 #define SPDK_CONFIG_OPENSSL_PATH 00:35:53.246 #undef SPDK_CONFIG_PGO_CAPTURE 00:35:53.246 #define SPDK_CONFIG_PGO_DIR 00:35:53.246 #undef SPDK_CONFIG_PGO_USE 00:35:53.246 #define SPDK_CONFIG_PREFIX /usr/local 00:35:53.246 #undef SPDK_CONFIG_RAID5F 00:35:53.246 #undef SPDK_CONFIG_RBD 00:35:53.246 #define SPDK_CONFIG_RDMA 1 00:35:53.246 #define SPDK_CONFIG_RDMA_PROV verbs 00:35:53.246 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:35:53.246 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:35:53.246 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:35:53.246 #define SPDK_CONFIG_SHARED 1 00:35:53.246 #undef SPDK_CONFIG_SMA 00:35:53.246 #define SPDK_CONFIG_TESTS 1 00:35:53.246 #undef SPDK_CONFIG_TSAN 00:35:53.246 #define SPDK_CONFIG_UBLK 1 00:35:53.246 #define SPDK_CONFIG_UBSAN 1 00:35:53.246 #undef SPDK_CONFIG_UNIT_TESTS 00:35:53.246 #undef SPDK_CONFIG_URING 00:35:53.246 #define SPDK_CONFIG_URING_PATH 00:35:53.246 #undef SPDK_CONFIG_URING_ZNS 00:35:53.246 #undef SPDK_CONFIG_USDT 00:35:53.246 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:35:53.246 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:35:53.246 #undef SPDK_CONFIG_VFIO_USER 00:35:53.246 #define SPDK_CONFIG_VFIO_USER_DIR 00:35:53.246 #define SPDK_CONFIG_VHOST 1 00:35:53.246 #define SPDK_CONFIG_VIRTIO 1 00:35:53.246 #undef SPDK_CONFIG_VTUNE 00:35:53.246 #define SPDK_CONFIG_VTUNE_DIR 00:35:53.246 #define SPDK_CONFIG_WERROR 1 00:35:53.246 #define SPDK_CONFIG_WPDK_DIR 00:35:53.246 #define SPDK_CONFIG_XNVME 1 00:35:53.246 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:35:53.246 13:30:46 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:35:53.246 13:30:46 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:53.246 13:30:46 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.246 13:30:46 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.246 13:30:46 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.246 13:30:46 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.246 13:30:46 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.246 13:30:46 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.246 13:30:46 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.246 13:30:46 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:35:53.246 13:30:46 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.246 13:30:46 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:35:53.246 13:30:46 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:35:53.246 13:30:46 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:35:53.246 13:30:46 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:35:53.246 13:30:46 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:35:53.246 13:30:46 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:35:53.246 13:30:46 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:35:53.246 13:30:46 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@68 -- # uname -s 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:35:53.247 
13:30:46 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:35:53.247 13:30:46 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:35:53.247 13:30:46 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:35:53.247 13:30:46 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:35:53.248 13:30:46 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:35:53.248 13:30:46 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:35:53.507 13:30:46 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
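The long run of ': 0' / ': 1' lines paired with 'export SPDK_TEST_*' above is autotest_common.sh giving every test flag a default while honoring values already present in the environment. The effect is consistent with the standard bash default-assignment idiom, sketched here with a hypothetical flag name:

  # Default SPDK_TEST_EXAMPLE to 0 unless the environment already set it,
  # then export it so child test scripts observe the same value.
  : "${SPDK_TEST_EXAMPLE:=0}"
  export SPDK_TEST_EXAMPLE

That is why the trace shows ': 1' ahead of export SPDK_TEST_NVME and SPDK_TEST_XNVME (enabled for this job) but ': 0' ahead of flags such as SPDK_TEST_NVMF.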
00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70820 ]] 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70820 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.mRVtEE 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.mRVtEE/tests/xnvme /tmp/spdk.mRVtEE 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:35:53.508 13:30:46 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975322624 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592432640 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975322624 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592432640 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:53.508 13:30:46 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266273792 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=94765391872 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4937388032 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:35:53.508 * Looking for test storage... 
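The df -T block above is set_test_storage building per-mountpoint tables (mounts, fss, sizes, avails, uses) and then checking whether a candidate test directory sits on a filesystem with the requested space free. A condensed sketch of that bookkeeping (hypothetical function name, simplified from the traced autotest_common.sh logic):

  pick_test_storage() {  # usage: pick_test_storage <bytes> <candidate-dir>...
      local requested_size=$1; shift
      local source fs size use avail mount target_dir
      local -A avails
      # df -T reports 1K blocks; store available space per mount point in bytes.
      while read -r source fs size use avail _ mount; do
          avails["$mount"]=$((avail * 1024))
      done < <(df -T | grep -v Filesystem)
      for target_dir in "$@"; do
          mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
          (( ${avails[$mount]:-0} >= requested_size )) && { echo "$target_dir"; return 0; }
      done
      return 1
  }

In this run /home has 13975322624 bytes available, comfortably above the 2214592512 requested, so SPDK_TEST_STORAGE stays at the in-tree test/nvme/xnvme directory, as the "Found test storage" line below confirms.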
00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975322624 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:53.508 13:30:46 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:53.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:53.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.509 --rc genhtml_branch_coverage=1 00:35:53.509 --rc genhtml_function_coverage=1 00:35:53.509 --rc genhtml_legend=1 00:35:53.509 --rc geninfo_all_blocks=1 00:35:53.509 --rc geninfo_unexecuted_blocks=1 00:35:53.509 00:35:53.509 ' 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:53.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.509 --rc genhtml_branch_coverage=1 00:35:53.509 --rc genhtml_function_coverage=1 00:35:53.509 --rc genhtml_legend=1 00:35:53.509 --rc geninfo_all_blocks=1 
00:35:53.509 --rc geninfo_unexecuted_blocks=1 00:35:53.509 00:35:53.509 ' 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:53.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.509 --rc genhtml_branch_coverage=1 00:35:53.509 --rc genhtml_function_coverage=1 00:35:53.509 --rc genhtml_legend=1 00:35:53.509 --rc geninfo_all_blocks=1 00:35:53.509 --rc geninfo_unexecuted_blocks=1 00:35:53.509 00:35:53.509 ' 00:35:53.509 13:30:46 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:53.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.509 --rc genhtml_branch_coverage=1 00:35:53.509 --rc genhtml_function_coverage=1 00:35:53.509 --rc genhtml_legend=1 00:35:53.509 --rc geninfo_all_blocks=1 00:35:53.509 --rc geninfo_unexecuted_blocks=1 00:35:53.509 00:35:53.509 ' 00:35:53.509 13:30:46 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.509 13:30:46 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.509 13:30:46 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.509 13:30:46 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.509 13:30:46 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.509 13:30:46 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:35:53.509 13:30:46 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.509 13:30:46 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:35:53.509 13:30:46 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:54.078 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:54.349 Waiting for block devices as requested 00:35:54.349 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:54.349 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:54.607 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:54.607 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:59.870 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:59.870 13:30:52 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:36:00.128 13:30:53 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:36:00.128 13:30:53 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:36:00.386 13:30:53 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:36:00.386 13:30:53 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:36:00.386 No valid GPT data, bailing 00:36:00.386 13:30:53 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:00.386 13:30:53 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:36:00.386 13:30:53 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:36:00.386 13:30:53 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:36:00.386 13:30:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:00.386 13:30:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.386 13:30:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:00.386 ************************************ 00:36:00.386 START TEST xnvme_rpc 00:36:00.386 ************************************ 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71220 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71220 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71220 ']' 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:00.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:00.386 13:30:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:00.644 [2024-12-06 13:30:53.583773] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
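For reference, the xnvme_rpc test that is starting here drives the target entirely over JSON-RPC. A minimal sketch of the same sequence issued by hand with the repo's scripts/rpc.py — assuming a target is already listening on /var/tmp/spdk.sock, and reusing the device path from the trace:

    # create an xnvme bdev backed by /dev/nvme0n1 using the libaio mechanism
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    # dump the saved bdev config and extract one parameter, as rpc_xnvme does
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
    # tear the bdev down again
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev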
00:36:00.644 [2024-12-06 13:30:53.583961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71220 ] 00:36:00.902 [2024-12-06 13:30:53.782301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:00.902 [2024-12-06 13:30:53.972153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:02.275 xnvme_bdev 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71220 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71220 ']' 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71220 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71220 00:36:02.275 killing process with pid 71220 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71220' 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71220 00:36:02.275 13:30:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71220 00:36:05.558 00:36:05.558 real 0m4.867s 00:36:05.558 user 0m4.774s 00:36:05.558 sys 0m0.792s 00:36:05.558 13:30:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:05.558 ************************************ 00:36:05.558 END TEST xnvme_rpc 00:36:05.558 ************************************ 00:36:05.558 13:30:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:05.558 13:30:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:36:05.558 13:30:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:05.558 13:30:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:05.558 13:30:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:05.558 ************************************ 00:36:05.558 START TEST xnvme_bdevperf 00:36:05.558 ************************************ 00:36:05.558 13:30:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:36:05.558 13:30:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:36:05.558 13:30:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:36:05.558 13:30:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:05.558 13:30:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:36:05.558 13:30:58 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:36:05.558 13:30:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:05.558 13:30:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:05.558 { 00:36:05.559 "subsystems": [ 00:36:05.559 { 00:36:05.559 "subsystem": "bdev", 00:36:05.559 "config": [ 00:36:05.559 { 00:36:05.559 "params": { 00:36:05.559 "io_mechanism": "libaio", 00:36:05.559 "conserve_cpu": false, 00:36:05.559 "filename": "/dev/nvme0n1", 00:36:05.559 "name": "xnvme_bdev" 00:36:05.559 }, 00:36:05.559 "method": "bdev_xnvme_create" 00:36:05.559 }, 00:36:05.559 { 00:36:05.559 "method": "bdev_wait_for_examine" 00:36:05.559 } 00:36:05.559 ] 00:36:05.559 } 00:36:05.559 ] 00:36:05.559 } 00:36:05.559 [2024-12-06 13:30:58.512526] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:36:05.559 [2024-12-06 13:30:58.512976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71313 ] 00:36:05.817 [2024-12-06 13:30:58.723496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.075 [2024-12-06 13:30:58.923691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.334 Running I/O for 5 seconds... 00:36:08.700 24258.00 IOPS, 94.76 MiB/s [2024-12-06T13:31:02.737Z] 26382.50 IOPS, 103.06 MiB/s [2024-12-06T13:31:03.672Z] 27954.00 IOPS, 109.20 MiB/s [2024-12-06T13:31:04.606Z] 28482.50 IOPS, 111.26 MiB/s 00:36:11.506 Latency(us) 00:36:11.506 [2024-12-06T13:31:04.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:11.506 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:36:11.506 xnvme_bdev : 5.00 28826.71 112.60 0.00 0.00 2215.08 85.82 43690.67 00:36:11.506 [2024-12-06T13:31:04.606Z] =================================================================================================================== 00:36:11.506 [2024-12-06T13:31:04.606Z] Total : 28826.71 112.60 0.00 0.00 2215.08 85.82 43690.67 00:36:12.880 13:31:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:12.880 13:31:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:36:12.880 13:31:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:36:12.880 13:31:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:12.880 13:31:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:12.880 { 00:36:12.880 "subsystems": [ 00:36:12.880 { 00:36:12.880 "subsystem": "bdev", 00:36:12.880 "config": [ 00:36:12.880 { 00:36:12.880 "params": { 00:36:12.880 "io_mechanism": "libaio", 00:36:12.880 "conserve_cpu": false, 00:36:12.880 "filename": "/dev/nvme0n1", 00:36:12.880 "name": "xnvme_bdev" 00:36:12.880 }, 00:36:12.880 "method": "bdev_xnvme_create" 00:36:12.880 }, 00:36:12.880 { 00:36:12.880 "method": "bdev_wait_for_examine" 00:36:12.880 } 00:36:12.880 ] 00:36:12.880 } 00:36:12.880 ] 00:36:12.880 } 00:36:12.880 [2024-12-06 13:31:05.966035] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
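For reference, each bdevperf run above receives its bdev configuration as JSON on /dev/fd/62. A minimal sketch of the same randread run with the configuration in an ordinary file instead — the JSON body and the flags mirror the trace; the file path and relative working directory are assumptions:

    cat > /tmp/xnvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "libaio",
                "conserve_cpu": false,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # -q queue depth, -w workload, -t runtime (s), -T target bdev, -o IO size (B)
    ./build/examples/bdevperf --json /tmp/xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096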
00:36:12.880 [2024-12-06 13:31:05.966533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71394 ] 00:36:13.138 [2024-12-06 13:31:06.159293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.395 [2024-12-06 13:31:06.317123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.959 Running I/O for 5 seconds... 00:36:15.888 31644.00 IOPS, 123.61 MiB/s [2024-12-06T13:31:09.922Z] 30428.50 IOPS, 118.86 MiB/s [2024-12-06T13:31:10.857Z] 29723.33 IOPS, 116.11 MiB/s [2024-12-06T13:31:11.791Z] 30552.25 IOPS, 119.34 MiB/s 00:36:18.691 Latency(us) 00:36:18.691 [2024-12-06T13:31:11.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:18.691 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:36:18.691 xnvme_bdev : 5.00 29930.70 116.92 0.00 0.00 2133.08 246.74 5648.58 00:36:18.691 [2024-12-06T13:31:11.791Z] =================================================================================================================== 00:36:18.691 [2024-12-06T13:31:11.792Z] Total : 29930.70 116.92 0.00 0.00 2133.08 246.74 5648.58 00:36:20.598 00:36:20.598 real 0m14.885s 00:36:20.598 user 0m5.962s 00:36:20.598 sys 0m6.102s 00:36:20.598 ************************************ 00:36:20.598 END TEST xnvme_bdevperf 00:36:20.598 ************************************ 00:36:20.598 13:31:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:20.598 13:31:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:20.598 13:31:13 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:36:20.598 13:31:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:20.598 13:31:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:20.598 13:31:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:20.598 ************************************ 00:36:20.598 START TEST xnvme_fio_plugin 00:36:20.598 ************************************ 00:36:20.598 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:36:20.598 13:31:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:36:20.598 13:31:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:36:20.598 13:31:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
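For reference, the xnvme_fio_plugin test being set up here runs stock fio against SPDK's external spdk_bdev ioengine. A minimal sketch of the standalone invocation, mirroring the flags in the trace — the ASan preload is only needed on sanitizer builds, and the JSON path reuses the file sketched earlier:

    LD_PRELOAD='/usr/lib64/libasan.so.8 ./build/fio/spdk_bdev' /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev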
00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:20.599 13:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:20.599 { 00:36:20.599 "subsystems": [ 00:36:20.599 { 00:36:20.599 "subsystem": "bdev", 00:36:20.599 "config": [ 00:36:20.599 { 00:36:20.599 "params": { 00:36:20.599 "io_mechanism": "libaio", 00:36:20.599 "conserve_cpu": false, 00:36:20.599 "filename": "/dev/nvme0n1", 00:36:20.599 "name": "xnvme_bdev" 00:36:20.599 }, 00:36:20.599 "method": "bdev_xnvme_create" 00:36:20.599 }, 00:36:20.599 { 00:36:20.599 "method": "bdev_wait_for_examine" 00:36:20.599 } 00:36:20.599 ] 00:36:20.599 } 00:36:20.599 ] 00:36:20.599 } 00:36:20.599 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:36:20.599 fio-3.35 00:36:20.599 Starting 1 thread 00:36:27.262 00:36:27.262 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71530: Fri Dec 6 13:31:19 2024 00:36:27.262 read: IOPS=29.3k, BW=114MiB/s (120MB/s)(572MiB/5001msec) 00:36:27.262 slat (usec): min=5, max=884, avg=30.51, stdev=25.33 00:36:27.262 clat (usec): min=103, max=5833, avg=1233.69, stdev=666.57 00:36:27.262 lat (usec): min=163, max=5867, avg=1264.20, stdev=668.74 00:36:27.262 clat percentiles (usec): 00:36:27.262 | 1.00th=[ 227], 5.00th=[ 334], 10.00th=[ 437], 20.00th=[ 619], 00:36:27.262 | 30.00th=[ 799], 40.00th=[ 971], 50.00th=[ 1156], 60.00th=[ 1336], 00:36:27.262 | 70.00th=[ 1532], 80.00th=[ 1795], 90.00th=[ 2147], 95.00th=[ 2376], 00:36:27.262 | 99.00th=[ 3064], 99.50th=[ 3687], 99.90th=[ 4555], 99.95th=[ 4752], 00:36:27.262 | 99.99th=[ 5211] 00:36:27.262 bw ( KiB/s): min=100240, max=134666, per=99.29%, avg=116267.56, stdev=10015.67, 
samples=9 00:36:27.262 iops : min=25060, max=33666, avg=29066.78, stdev=2503.79, samples=9 00:36:27.262 lat (usec) : 250=1.69%, 500=11.76%, 750=13.74%, 1000=14.31% 00:36:27.262 lat (msec) : 2=44.71%, 4=13.48%, 10=0.31% 00:36:27.262 cpu : usr=22.30%, sys=53.66%, ctx=108, majf=0, minf=764 00:36:27.262 IO depths : 1=0.1%, 2=1.1%, 4=5.0%, 8=12.4%, 16=26.1%, 32=53.6%, >=64=1.7% 00:36:27.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.262 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:36:27.262 issued rwts: total=146405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.262 latency : target=0, window=0, percentile=100.00%, depth=64 00:36:27.262 00:36:27.262 Run status group 0 (all jobs): 00:36:27.262 READ: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=572MiB (600MB), run=5001-5001msec 00:36:28.199 ----------------------------------------------------- 00:36:28.199 Suppressions used: 00:36:28.199 count bytes template 00:36:28.199 1 11 /usr/src/fio/parse.c 00:36:28.199 1 8 libtcmalloc_minimal.so 00:36:28.199 1 904 libcrypto.so 00:36:28.199 ----------------------------------------------------- 00:36:28.199 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:28.199 13:31:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:28.199 { 00:36:28.199 "subsystems": [ 00:36:28.199 { 00:36:28.199 "subsystem": "bdev", 00:36:28.199 "config": [ 00:36:28.199 { 00:36:28.199 "params": { 00:36:28.199 "io_mechanism": "libaio", 00:36:28.199 "conserve_cpu": false, 00:36:28.199 "filename": "/dev/nvme0n1", 00:36:28.199 "name": "xnvme_bdev" 00:36:28.199 }, 00:36:28.199 "method": "bdev_xnvme_create" 00:36:28.199 }, 00:36:28.199 { 00:36:28.199 "method": "bdev_wait_for_examine" 00:36:28.199 } 00:36:28.199 ] 00:36:28.199 } 00:36:28.199 ] 00:36:28.199 } 00:36:28.459 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:36:28.459 fio-3.35 00:36:28.459 Starting 1 thread 00:36:35.015 00:36:35.015 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71622: Fri Dec 6 13:31:27 2024 00:36:35.015 write: IOPS=29.0k, BW=113MiB/s (119MB/s)(567MiB/5001msec); 0 zone resets 00:36:35.015 slat (usec): min=5, max=845, avg=30.73, stdev=26.19 00:36:35.015 clat (usec): min=87, max=7301, avg=1240.61, stdev=697.39 00:36:35.015 lat (usec): min=127, max=7429, avg=1271.34, stdev=700.86 00:36:35.015 clat percentiles (usec): 00:36:35.015 | 1.00th=[ 231], 5.00th=[ 334], 10.00th=[ 437], 20.00th=[ 611], 00:36:35.015 | 30.00th=[ 766], 40.00th=[ 930], 50.00th=[ 1106], 60.00th=[ 1319], 00:36:35.015 | 70.00th=[ 1582], 80.00th=[ 1876], 90.00th=[ 2212], 95.00th=[ 2442], 00:36:35.015 | 99.00th=[ 3130], 99.50th=[ 3654], 99.90th=[ 4490], 99.95th=[ 4883], 00:36:35.015 | 99.99th=[ 6390] 00:36:35.015 bw ( KiB/s): min=93320, max=172942, per=100.00%, avg=118061.00, stdev=26462.70, samples=9 00:36:35.015 iops : min=23330, max=43235, avg=29515.11, stdev=6615.62, samples=9 00:36:35.015 lat (usec) : 100=0.01%, 250=1.56%, 500=11.88%, 750=15.32%, 1000=15.64% 00:36:35.015 lat (msec) : 2=39.65%, 4=15.69%, 10=0.28% 00:36:35.015 cpu : usr=23.40%, sys=53.34%, ctx=82, majf=0, minf=765 00:36:35.015 IO depths : 1=0.1%, 2=1.5%, 4=5.2%, 8=12.0%, 16=25.6%, 32=53.9%, >=64=1.7% 00:36:35.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:35.015 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:36:35.015 issued rwts: total=0,145257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:35.015 latency : target=0, window=0, percentile=100.00%, depth=64 00:36:35.015 00:36:35.015 Run status group 0 (all jobs): 00:36:35.015 WRITE: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=567MiB (595MB), run=5001-5001msec 00:36:35.953 ----------------------------------------------------- 00:36:35.953 Suppressions used: 00:36:35.953 count bytes template 00:36:35.953 1 11 /usr/src/fio/parse.c 00:36:35.953 1 8 libtcmalloc_minimal.so 00:36:35.953 1 904 libcrypto.so 00:36:35.953 ----------------------------------------------------- 00:36:35.953 00:36:36.212 ************************************ 00:36:36.212 END TEST xnvme_fio_plugin 00:36:36.212 ************************************ 
00:36:36.212 00:36:36.212 real 0m15.745s 00:36:36.212 user 0m6.682s 00:36:36.212 sys 0m6.369s 00:36:36.212 13:31:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:36.212 13:31:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:36.212 13:31:29 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:36:36.212 13:31:29 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:36:36.212 13:31:29 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:36:36.212 13:31:29 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:36:36.212 13:31:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:36.212 13:31:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:36.212 13:31:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:36.212 ************************************ 00:36:36.212 START TEST xnvme_rpc 00:36:36.212 ************************************ 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:36:36.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71714 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71714 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71714 ']' 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:36.212 13:31:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:36.212 [2024-12-06 13:31:29.273686] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
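For reference, the pass starting here repeats xnvme_rpc, xnvme_bdevperf, and xnvme_fio_plugin with conserve_cpu=true. On the RPC side the only difference is the extra -c flag at create time, per the cc["true"]=-c mapping earlier in the trace; a sketch under the same assumptions as before:

    # same bdev, but with the conserve_cpu option enabled
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    # the saved config should now report the flag as true
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'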
00:36:36.212 [2024-12-06 13:31:29.274050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71714 ] 00:36:36.472 [2024-12-06 13:31:29.452886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:36.732 [2024-12-06 13:31:29.609286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.669 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:37.669 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:36:37.669 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:36:37.669 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.669 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:37.669 xnvme_bdev 00:36:37.669 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.669 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:36:37.669 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:36:37.669 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71714 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71714 ']' 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71714 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71714 00:36:37.928 killing process with pid 71714 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71714' 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71714 00:36:37.928 13:31:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71714 00:36:41.214 00:36:41.214 real 0m4.728s 00:36:41.214 user 0m4.640s 00:36:41.214 sys 0m0.792s 00:36:41.214 13:31:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:41.214 ************************************ 00:36:41.214 END TEST xnvme_rpc 00:36:41.214 ************************************ 00:36:41.215 13:31:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:41.215 13:31:33 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:36:41.215 13:31:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:41.215 13:31:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:41.215 13:31:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:41.215 ************************************ 00:36:41.215 START TEST xnvme_bdevperf 00:36:41.215 ************************************ 00:36:41.215 13:31:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:36:41.215 13:31:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:36:41.215 13:31:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:36:41.215 13:31:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:41.215 13:31:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:36:41.215 13:31:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T 
xnvme_bdev -o 4096 00:36:41.215 13:31:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:41.215 13:31:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:41.215 { 00:36:41.215 "subsystems": [ 00:36:41.215 { 00:36:41.215 "subsystem": "bdev", 00:36:41.215 "config": [ 00:36:41.215 { 00:36:41.215 "params": { 00:36:41.215 "io_mechanism": "libaio", 00:36:41.215 "conserve_cpu": true, 00:36:41.215 "filename": "/dev/nvme0n1", 00:36:41.215 "name": "xnvme_bdev" 00:36:41.215 }, 00:36:41.215 "method": "bdev_xnvme_create" 00:36:41.215 }, 00:36:41.215 { 00:36:41.215 "method": "bdev_wait_for_examine" 00:36:41.215 } 00:36:41.215 ] 00:36:41.215 } 00:36:41.215 ] 00:36:41.215 } 00:36:41.215 [2024-12-06 13:31:34.053170] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:36:41.215 [2024-12-06 13:31:34.053369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71810 ] 00:36:41.215 [2024-12-06 13:31:34.256535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.476 [2024-12-06 13:31:34.429797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:42.043 Running I/O for 5 seconds... 00:36:43.910 30952.00 IOPS, 120.91 MiB/s [2024-12-06T13:31:37.971Z] 30926.00 IOPS, 120.80 MiB/s [2024-12-06T13:31:38.904Z] 32157.67 IOPS, 125.62 MiB/s [2024-12-06T13:31:40.277Z] 32345.75 IOPS, 126.35 MiB/s 00:36:47.177 Latency(us) 00:36:47.177 [2024-12-06T13:31:40.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:47.177 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:36:47.177 xnvme_bdev : 5.00 31593.17 123.41 0.00 0.00 2021.00 280.87 6459.98 00:36:47.177 [2024-12-06T13:31:40.277Z] =================================================================================================================== 00:36:47.177 [2024-12-06T13:31:40.277Z] Total : 31593.17 123.41 0.00 0.00 2021.00 280.87 6459.98 00:36:48.551 13:31:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:48.552 13:31:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:36:48.552 13:31:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:36:48.552 13:31:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:48.552 13:31:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:48.552 { 00:36:48.552 "subsystems": [ 00:36:48.552 { 00:36:48.552 "subsystem": "bdev", 00:36:48.552 "config": [ 00:36:48.552 { 00:36:48.552 "params": { 00:36:48.552 "io_mechanism": "libaio", 00:36:48.552 "conserve_cpu": true, 00:36:48.552 "filename": "/dev/nvme0n1", 00:36:48.552 "name": "xnvme_bdev" 00:36:48.552 }, 00:36:48.552 "method": "bdev_xnvme_create" 00:36:48.552 }, 00:36:48.552 { 00:36:48.552 "method": "bdev_wait_for_examine" 00:36:48.552 } 00:36:48.552 ] 00:36:48.552 } 00:36:48.552 ] 00:36:48.552 } 00:36:48.552 [2024-12-06 13:31:41.491282] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
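As a quick arithmetic check on the bdevperf summary lines, the MiB/s column is just IOPS times the 4096-byte IO size; for the randread figures above:

    # 31593.17 IOPS * 4096 B / 2^20 ≈ 123.41 MiB/s, matching the table
    awk 'BEGIN { printf "%.2f MiB/s\n", 31593.17 * 4096 / 1048576 }'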
00:36:48.552 [2024-12-06 13:31:41.491534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71892 ] 00:36:48.809 [2024-12-06 13:31:41.692309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.809 [2024-12-06 13:31:41.853862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.393 Running I/O for 5 seconds... 00:36:51.319 28969.00 IOPS, 113.16 MiB/s [2024-12-06T13:31:45.354Z] 27886.00 IOPS, 108.93 MiB/s [2024-12-06T13:31:46.724Z] 28569.67 IOPS, 111.60 MiB/s [2024-12-06T13:31:47.657Z] 27580.50 IOPS, 107.74 MiB/s 00:36:54.558 Latency(us) 00:36:54.558 [2024-12-06T13:31:47.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.558 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:36:54.558 xnvme_bdev : 5.00 28068.45 109.64 0.00 0.00 2274.52 68.27 48933.55 00:36:54.558 [2024-12-06T13:31:47.658Z] =================================================================================================================== 00:36:54.558 [2024-12-06T13:31:47.658Z] Total : 28068.45 109.64 0.00 0.00 2274.52 68.27 48933.55 00:36:55.959 00:36:55.959 real 0m14.830s 00:36:55.959 user 0m6.003s 00:36:55.959 sys 0m5.988s 00:36:55.959 13:31:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:55.959 13:31:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:55.959 ************************************ 00:36:55.959 END TEST xnvme_bdevperf 00:36:55.959 ************************************ 00:36:55.959 13:31:48 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:36:55.959 13:31:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:55.959 13:31:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:55.959 13:31:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:55.959 ************************************ 00:36:55.959 START TEST xnvme_fio_plugin 00:36:55.959 ************************************ 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:55.959 13:31:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:36:55.959 { 00:36:55.959 "subsystems": [ 00:36:55.959 { 00:36:55.959 "subsystem": "bdev", 00:36:55.959 "config": [ 00:36:55.959 { 00:36:55.959 "params": { 00:36:55.959 "io_mechanism": "libaio", 00:36:55.959 "conserve_cpu": true, 00:36:55.959 "filename": "/dev/nvme0n1", 00:36:55.959 "name": "xnvme_bdev" 00:36:55.959 }, 00:36:55.959 "method": "bdev_xnvme_create" 00:36:55.959 }, 00:36:55.959 { 00:36:55.959 "method": "bdev_wait_for_examine" 00:36:55.959 } 00:36:55.959 ] 00:36:55.959 } 00:36:55.959 ] 00:36:55.959 } 00:36:56.218 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:36:56.218 fio-3.35 00:36:56.218 Starting 1 thread 00:37:02.801 00:37:02.801 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72019: Fri Dec 6 13:31:55 2024 00:37:02.801 read: IOPS=27.7k, BW=108MiB/s (113MB/s)(541MiB/5001msec) 00:37:02.801 slat (usec): min=5, max=1410, avg=32.39, stdev=27.15 00:37:02.801 clat (usec): min=103, max=5609, avg=1285.59, stdev=711.03 00:37:02.801 lat (usec): min=154, max=5715, avg=1317.99, stdev=713.63 00:37:02.801 clat percentiles (usec): 00:37:02.801 | 1.00th=[ 227], 5.00th=[ 330], 10.00th=[ 437], 20.00th=[ 627], 00:37:02.801 | 30.00th=[ 807], 40.00th=[ 988], 50.00th=[ 1188], 60.00th=[ 1401], 00:37:02.801 | 70.00th=[ 1647], 80.00th=[ 1909], 90.00th=[ 2212], 95.00th=[ 2474], 00:37:02.801 | 99.00th=[ 3326], 99.50th=[ 3884], 99.90th=[ 4686], 99.95th=[ 4883], 00:37:02.801 | 99.99th=[ 5276] 00:37:02.801 bw ( KiB/s): min=98840, max=132192, per=100.00%, avg=112023.11, 
stdev=12139.68, samples=9 00:37:02.801 iops : min=24710, max=33048, avg=28005.78, stdev=3034.92, samples=9 00:37:02.801 lat (usec) : 250=1.70%, 500=11.49%, 750=13.39%, 1000=14.02% 00:37:02.801 lat (msec) : 2=42.63%, 4=16.35%, 10=0.41% 00:37:02.801 cpu : usr=22.32%, sys=52.96%, ctx=128, majf=0, minf=764 00:37:02.801 IO depths : 1=0.1%, 2=1.6%, 4=5.4%, 8=12.1%, 16=25.7%, 32=53.4%, >=64=1.7% 00:37:02.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.801 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:37:02.801 issued rwts: total=138562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.801 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:02.801 00:37:02.801 Run status group 0 (all jobs): 00:37:02.801 READ: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=541MiB (568MB), run=5001-5001msec 00:37:03.735 ----------------------------------------------------- 00:37:03.735 Suppressions used: 00:37:03.735 count bytes template 00:37:03.735 1 11 /usr/src/fio/parse.c 00:37:03.735 1 8 libtcmalloc_minimal.so 00:37:03.735 1 904 libcrypto.so 00:37:03.735 ----------------------------------------------------- 00:37:03.735 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:03.735 13:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:03.735 { 00:37:03.735 "subsystems": [ 00:37:03.735 { 00:37:03.735 "subsystem": "bdev", 00:37:03.735 "config": [ 00:37:03.735 { 00:37:03.735 "params": { 00:37:03.735 "io_mechanism": "libaio", 00:37:03.735 "conserve_cpu": true, 00:37:03.735 "filename": "/dev/nvme0n1", 00:37:03.735 "name": "xnvme_bdev" 00:37:03.735 }, 00:37:03.735 "method": "bdev_xnvme_create" 00:37:03.735 }, 00:37:03.735 { 00:37:03.735 "method": "bdev_wait_for_examine" 00:37:03.735 } 00:37:03.735 ] 00:37:03.735 } 00:37:03.735 ] 00:37:03.735 } 00:37:03.993 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:37:03.993 fio-3.35 00:37:03.993 Starting 1 thread 00:37:10.551 00:37:10.551 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72122: Fri Dec 6 13:32:02 2024 00:37:10.551 write: IOPS=29.0k, BW=113MiB/s (119MB/s)(567MiB/5001msec); 0 zone resets 00:37:10.551 slat (usec): min=5, max=832, avg=30.63, stdev=29.49 00:37:10.551 clat (usec): min=63, max=7415, avg=1250.19, stdev=699.35 00:37:10.551 lat (usec): min=97, max=7507, avg=1280.83, stdev=702.04 00:37:10.551 clat percentiles (usec): 00:37:10.551 | 1.00th=[ 229], 5.00th=[ 338], 10.00th=[ 441], 20.00th=[ 627], 00:37:10.551 | 30.00th=[ 799], 40.00th=[ 963], 50.00th=[ 1139], 60.00th=[ 1336], 00:37:10.551 | 70.00th=[ 1549], 80.00th=[ 1827], 90.00th=[ 2180], 95.00th=[ 2442], 00:37:10.551 | 99.00th=[ 3392], 99.50th=[ 3851], 99.90th=[ 4686], 99.95th=[ 4948], 00:37:10.551 | 99.99th=[ 5669] 00:37:10.551 bw ( KiB/s): min=95312, max=169752, per=100.00%, avg=116776.89, stdev=21576.69, samples=9 00:37:10.551 iops : min=23828, max=42438, avg=29194.22, stdev=5394.17, samples=9 00:37:10.551 lat (usec) : 100=0.01%, 250=1.57%, 500=11.41%, 750=14.21%, 1000=14.94% 00:37:10.551 lat (msec) : 2=43.23%, 4=14.25%, 10=0.39% 00:37:10.551 cpu : usr=22.68%, sys=53.24%, ctx=114, majf=0, minf=765 00:37:10.551 IO depths : 1=0.1%, 2=1.5%, 4=5.0%, 8=11.7%, 16=25.5%, 32=54.4%, >=64=1.7% 00:37:10.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.551 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:37:10.551 issued rwts: total=0,145143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:10.551 00:37:10.551 Run status group 0 (all jobs): 00:37:10.551 WRITE: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=567MiB (595MB), run=5001-5001msec 00:37:11.927 ----------------------------------------------------- 00:37:11.927 Suppressions used: 00:37:11.927 count bytes template 00:37:11.927 1 11 /usr/src/fio/parse.c 00:37:11.927 1 8 libtcmalloc_minimal.so 00:37:11.927 1 904 libcrypto.so 00:37:11.928 ----------------------------------------------------- 00:37:11.928 00:37:11.928 00:37:11.928 real 0m15.850s 00:37:11.928 user 0m6.729s 00:37:11.928 sys 0m6.339s 00:37:11.928 13:32:04 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:11.928 13:32:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:11.928 ************************************ 00:37:11.928 END TEST xnvme_fio_plugin 00:37:11.928 ************************************ 00:37:11.928 13:32:04 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:37:11.928 13:32:04 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:37:11.928 13:32:04 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:37:11.928 13:32:04 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:37:11.928 13:32:04 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:37:11.928 13:32:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:37:11.928 13:32:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:37:11.928 13:32:04 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:37:11.928 13:32:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:37:11.928 13:32:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:11.928 13:32:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:11.928 13:32:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:11.928 ************************************ 00:37:11.928 START TEST xnvme_rpc 00:37:11.928 ************************************ 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72218 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72218 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72218 ']' 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:11.928 13:32:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:11.928 [2024-12-06 13:32:04.865142] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
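For reference, from this point the whole cycle repeats with io_mechanism=io_uring, and later io_uring_cmd against the /dev/ng0n1 character device, per the xnvme_io and xnvme_filename tables at the top of this section. Only the mechanism argument changes at create time; a sketch under the same assumptions as before:

    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring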
00:37:11.928 [2024-12-06 13:32:04.866175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72218 ] 00:37:12.187 [2024-12-06 13:32:05.059655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.187 [2024-12-06 13:32:05.252210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:13.562 xnvme_bdev 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72218 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72218 ']' 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72218 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72218 00:37:13.562 killing process with pid 72218 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72218' 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72218 00:37:13.562 13:32:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72218 00:37:16.843 00:37:16.843 real 0m4.776s 00:37:16.843 user 0m4.698s 00:37:16.843 sys 0m0.767s 00:37:16.843 13:32:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:16.843 13:32:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:16.843 ************************************ 00:37:16.843 END TEST xnvme_rpc 00:37:16.843 ************************************ 00:37:16.843 13:32:09 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:37:16.843 13:32:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:16.843 13:32:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:16.843 13:32:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:16.843 ************************************ 00:37:16.843 START TEST xnvme_bdevperf 00:37:16.843 ************************************ 00:37:16.843 13:32:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:37:16.843 13:32:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:37:16.843 13:32:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:37:16.843 13:32:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:16.843 13:32:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:37:16.843 13:32:09 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:37:16.843 13:32:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:16.843 13:32:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:16.843 { 00:37:16.843 "subsystems": [ 00:37:16.843 { 00:37:16.843 "subsystem": "bdev", 00:37:16.843 "config": [ 00:37:16.843 { 00:37:16.843 "params": { 00:37:16.843 "io_mechanism": "io_uring", 00:37:16.843 "conserve_cpu": false, 00:37:16.843 "filename": "/dev/nvme0n1", 00:37:16.843 "name": "xnvme_bdev" 00:37:16.843 }, 00:37:16.843 "method": "bdev_xnvme_create" 00:37:16.843 }, 00:37:16.843 { 00:37:16.843 "method": "bdev_wait_for_examine" 00:37:16.843 } 00:37:16.843 ] 00:37:16.843 } 00:37:16.843 ] 00:37:16.843 } 00:37:16.843 [2024-12-06 13:32:09.684926] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:37:16.843 [2024-12-06 13:32:09.685084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72305 ] 00:37:16.843 [2024-12-06 13:32:09.872673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.101 [2024-12-06 13:32:10.028045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.667 Running I/O for 5 seconds... 00:37:19.540 43112.00 IOPS, 168.41 MiB/s [2024-12-06T13:32:13.576Z] 46202.50 IOPS, 180.48 MiB/s [2024-12-06T13:32:14.513Z] 47053.33 IOPS, 183.80 MiB/s [2024-12-06T13:32:15.889Z] 46534.00 IOPS, 181.77 MiB/s 00:37:22.789 Latency(us) 00:37:22.789 [2024-12-06T13:32:15.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:22.789 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:37:22.789 xnvme_bdev : 5.00 46635.82 182.17 0.00 0.00 1368.11 349.14 8550.89 00:37:22.789 [2024-12-06T13:32:15.889Z] =================================================================================================================== 00:37:22.789 [2024-12-06T13:32:15.889Z] Total : 46635.82 182.17 0.00 0.00 1368.11 349.14 8550.89 00:37:23.751 13:32:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:23.751 13:32:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:37:23.751 13:32:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:37:23.751 13:32:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:23.751 13:32:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:24.010 { 00:37:24.010 "subsystems": [ 00:37:24.010 { 00:37:24.010 "subsystem": "bdev", 00:37:24.010 "config": [ 00:37:24.010 { 00:37:24.010 "params": { 00:37:24.010 "io_mechanism": "io_uring", 00:37:24.010 "conserve_cpu": false, 00:37:24.010 "filename": "/dev/nvme0n1", 00:37:24.010 "name": "xnvme_bdev" 00:37:24.010 }, 00:37:24.010 "method": "bdev_xnvme_create" 00:37:24.010 }, 00:37:24.010 { 00:37:24.010 "method": "bdev_wait_for_examine" 00:37:24.010 } 00:37:24.010 ] 00:37:24.010 } 00:37:24.010 ] 00:37:24.010 } 00:37:24.010 [2024-12-06 13:32:16.943682] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
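The --json /dev/fd/62 argument in the bdevperf commands above is bash process substitution: gen_conf writes the subsystem config to a pipe and bdevperf reads it back as an ordinary file. A hand-run sketch with the config inlined rather than generated (the conf variable name is illustrative; the flags and JSON body are taken from the trace):

  conf='{"subsystems": [{"subsystem": "bdev", "config": [
    {"method": "bdev_xnvme_create", "params": {"io_mechanism": "io_uring",
     "conserve_cpu": false, "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
    {"method": "bdev_wait_for_examine"}]}]}'
  ./build/examples/bdevperf --json <(echo "$conf") \
      -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096

The fd number in /dev/fd/62 is simply whichever descriptor the shell hands the substitution.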
00:37:24.010 [2024-12-06 13:32:16.943879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72388 ] 00:37:24.269 [2024-12-06 13:32:17.140242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.269 [2024-12-06 13:32:17.295547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.836 Running I/O for 5 seconds... 00:37:26.709 39961.00 IOPS, 156.10 MiB/s [2024-12-06T13:32:20.751Z] 40774.50 IOPS, 159.28 MiB/s [2024-12-06T13:32:22.214Z] 40903.33 IOPS, 159.78 MiB/s [2024-12-06T13:32:22.795Z] 40933.50 IOPS, 159.90 MiB/s 00:37:29.695 Latency(us) 00:37:29.695 [2024-12-06T13:32:22.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:29.695 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:37:29.695 xnvme_bdev : 5.00 41188.95 160.89 0.00 0.00 1548.44 118.49 6397.56 00:37:29.695 [2024-12-06T13:32:22.795Z] =================================================================================================================== 00:37:29.695 [2024-12-06T13:32:22.795Z] Total : 41188.95 160.89 0.00 0.00 1548.44 118.49 6397.56 00:37:31.087 00:37:31.087 real 0m14.529s 00:37:31.087 user 0m7.172s 00:37:31.087 sys 0m7.139s 00:37:31.087 13:32:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:31.087 ************************************ 00:37:31.087 END TEST xnvme_bdevperf 00:37:31.087 ************************************ 00:37:31.087 13:32:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:31.087 13:32:24 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:37:31.087 13:32:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:31.087 13:32:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:31.087 13:32:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:31.087 ************************************ 00:37:31.087 START TEST xnvme_fio_plugin 00:37:31.087 ************************************ 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:37:31.087 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:31.345 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:31.345 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:31.345 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:37:31.345 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:31.345 13:32:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:31.345 { 00:37:31.345 "subsystems": [ 00:37:31.345 { 00:37:31.345 "subsystem": "bdev", 00:37:31.345 "config": [ 00:37:31.345 { 00:37:31.345 "params": { 00:37:31.345 "io_mechanism": "io_uring", 00:37:31.345 "conserve_cpu": false, 00:37:31.345 "filename": "/dev/nvme0n1", 00:37:31.345 "name": "xnvme_bdev" 00:37:31.345 }, 00:37:31.345 "method": "bdev_xnvme_create" 00:37:31.345 }, 00:37:31.345 { 00:37:31.345 "method": "bdev_wait_for_examine" 00:37:31.345 } 00:37:31.345 ] 00:37:31.345 } 00:37:31.345 ] 00:37:31.345 } 00:37:31.604 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:37:31.604 fio-3.35 00:37:31.604 Starting 1 thread 00:37:38.164 00:37:38.164 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72522: Fri Dec 6 13:32:30 2024 00:37:38.164 read: IOPS=47.0k, BW=183MiB/s (192MB/s)(918MiB/5004msec) 00:37:38.164 slat (usec): min=2, max=880, avg= 4.11, stdev= 2.39 00:37:38.164 clat (usec): min=178, max=17020, avg=1200.24, stdev=236.22 00:37:38.164 lat (usec): min=184, max=17023, avg=1204.35, stdev=236.71 00:37:38.164 clat percentiles (usec): 00:37:38.164 | 1.00th=[ 865], 5.00th=[ 971], 10.00th=[ 1012], 20.00th=[ 1057], 00:37:38.164 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1205], 00:37:38.164 | 70.00th=[ 1237], 80.00th=[ 1303], 90.00th=[ 1385], 95.00th=[ 1532], 00:37:38.164 | 99.00th=[ 1926], 99.50th=[ 2311], 99.90th=[ 3687], 99.95th=[ 4490], 00:37:38.164 | 99.99th=[ 5145] 00:37:38.164 bw ( KiB/s): min=180152, max=201216, per=100.00%, 
avg=189888.00, stdev=7467.42, samples=9 00:37:38.164 iops : min=45038, max=50304, avg=47472.00, stdev=1866.85, samples=9 00:37:38.164 lat (usec) : 250=0.02%, 500=0.12%, 750=0.24%, 1000=7.81% 00:37:38.164 lat (msec) : 2=91.01%, 4=0.72%, 10=0.08%, 20=0.01% 00:37:38.164 cpu : usr=34.06%, sys=65.00%, ctx=13, majf=0, minf=762 00:37:38.164 IO depths : 1=1.3%, 2=2.9%, 4=6.0%, 8=12.2%, 16=24.9%, 32=51.1%, >=64=1.7% 00:37:38.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:38.164 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:37:38.164 issued rwts: total=235031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:38.164 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:38.164 00:37:38.164 Run status group 0 (all jobs): 00:37:38.164 READ: bw=183MiB/s (192MB/s), 183MiB/s-183MiB/s (192MB/s-192MB/s), io=918MiB (963MB), run=5004-5004msec 00:37:39.098 ----------------------------------------------------- 00:37:39.098 Suppressions used: 00:37:39.098 count bytes template 00:37:39.098 1 11 /usr/src/fio/parse.c 00:37:39.098 1 8 libtcmalloc_minimal.so 00:37:39.098 1 904 libcrypto.so 00:37:39.098 ----------------------------------------------------- 00:37:39.098 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:37:39.098 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:39.099 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:39.099 13:32:32 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:39.099 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:37:39.099 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:39.099 13:32:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:37:39.099 { 00:37:39.099 "subsystems": [ 00:37:39.099 { 00:37:39.099 "subsystem": "bdev", 00:37:39.099 "config": [ 00:37:39.099 { 00:37:39.099 "params": { 00:37:39.099 "io_mechanism": "io_uring", 00:37:39.099 "conserve_cpu": false, 00:37:39.099 "filename": "/dev/nvme0n1", 00:37:39.099 "name": "xnvme_bdev" 00:37:39.099 }, 00:37:39.099 "method": "bdev_xnvme_create" 00:37:39.099 }, 00:37:39.099 { 00:37:39.099 "method": "bdev_wait_for_examine" 00:37:39.099 } 00:37:39.099 ] 00:37:39.099 } 00:37:39.099 ] 00:37:39.099 } 00:37:39.357 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:37:39.357 fio-3.35 00:37:39.357 Starting 1 thread 00:37:45.920 00:37:45.920 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72619: Fri Dec 6 13:32:38 2024 00:37:45.920 write: IOPS=45.1k, BW=176MiB/s (185MB/s)(881MiB/5001msec); 0 zone resets 00:37:45.920 slat (usec): min=2, max=107, avg= 4.49, stdev= 1.96 00:37:45.920 clat (usec): min=795, max=3205, avg=1240.32, stdev=229.99 00:37:45.920 lat (usec): min=798, max=3249, avg=1244.81, stdev=231.00 00:37:45.920 clat percentiles (usec): 00:37:45.920 | 1.00th=[ 873], 5.00th=[ 947], 10.00th=[ 1004], 20.00th=[ 1074], 00:37:45.920 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1254], 00:37:45.920 | 70.00th=[ 1303], 80.00th=[ 1369], 90.00th=[ 1516], 95.00th=[ 1696], 00:37:45.920 | 99.00th=[ 2040], 99.50th=[ 2147], 99.90th=[ 2474], 99.95th=[ 2671], 00:37:45.920 | 99.99th=[ 2999] 00:37:45.920 bw ( KiB/s): min=158208, max=211456, per=100.00%, avg=182328.89, stdev=17873.21, samples=9 00:37:45.920 iops : min=39552, max=52864, avg=45582.22, stdev=4468.30, samples=9 00:37:45.920 lat (usec) : 1000=9.53% 00:37:45.920 lat (msec) : 2=89.14%, 4=1.33% 00:37:45.920 cpu : usr=34.74%, sys=64.28%, ctx=15, majf=0, minf=763 00:37:45.920 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:37:45.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:45.920 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:37:45.920 issued rwts: total=0,225408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:45.920 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:45.920 00:37:45.920 Run status group 0 (all jobs): 00:37:45.920 WRITE: bw=176MiB/s (185MB/s), 176MiB/s-176MiB/s (185MB/s-185MB/s), io=881MiB (923MB), run=5001-5001msec 00:37:46.856 ----------------------------------------------------- 00:37:46.856 Suppressions used: 00:37:46.856 count bytes template 00:37:46.856 1 11 /usr/src/fio/parse.c 00:37:46.856 1 8 libtcmalloc_minimal.so 00:37:46.856 1 904 libcrypto.so 00:37:46.856 ----------------------------------------------------- 00:37:46.856 00:37:46.856 00:37:46.856 real 0m15.640s 00:37:46.856 user 0m7.806s 00:37:46.856 sys 0m7.466s 00:37:46.856 13:32:39 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:37:46.856 13:32:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:37:46.856 ************************************ 00:37:46.856 END TEST xnvme_fio_plugin 00:37:46.856 ************************************ 00:37:46.856 13:32:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:37:46.856 13:32:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:37:46.856 13:32:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:37:46.856 13:32:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:37:46.856 13:32:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:46.856 13:32:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:46.856 13:32:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:46.856 ************************************ 00:37:46.856 START TEST xnvme_rpc 00:37:46.856 ************************************ 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:37:46.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72712 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72712 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72712 ']' 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:46.856 13:32:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:47.115 [2024-12-06 13:32:39.997400] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
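This second xnvme_rpc round repeats the first with conserve_cpu flipped to true. The cc map initialized above (cc["false"]=, cc["true"]=-c) turns the boolean into an optional -c flag on the create call, which is why the trace that follows shows bdev_xnvme_create invoked with -c. A sketch of the pattern; the exact expansion is an assumption, while the names come from the trace:

  declare -A cc
  cc["false"]=    # empty: conserve_cpu stays off
  cc["true"]=-c   # -c: ask the xnvme bdev to conserve CPU
  rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ${cc[$conserve_cpu]}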
00:37:47.115 [2024-12-06 13:32:39.997586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72712 ] 00:37:47.115 [2024-12-06 13:32:40.180465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.374 [2024-12-06 13:32:40.329582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.378 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:48.378 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:37:48.378 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:37:48.378 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.378 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:48.378 xnvme_bdev 00:37:48.378 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.378 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:37:48.378 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:48.378 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.378 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:48.378 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:37:48.378 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72712 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72712 ']' 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72712 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72712 00:37:48.637 killing process with pid 72712 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72712' 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72712 00:37:48.637 13:32:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72712 00:37:51.926 ************************************ 00:37:51.926 END TEST xnvme_rpc 00:37:51.926 ************************************ 00:37:51.926 00:37:51.926 real 0m4.600s 00:37:51.926 user 0m4.524s 00:37:51.926 sys 0m0.731s 00:37:51.926 13:32:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.926 13:32:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:51.926 13:32:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:37:51.926 13:32:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:51.926 13:32:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.926 13:32:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:51.926 ************************************ 00:37:51.926 START TEST xnvme_bdevperf 00:37:51.926 ************************************ 00:37:51.926 13:32:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:37:51.926 13:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:37:51.926 13:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:37:51.926 13:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:51.926 13:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:37:51.926 13:32:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
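xnvme_bdevperf walks its I/O patterns through a bash nameref: local -n io_pattern_ref=io_uring binds io_pattern_ref to whatever array the current mechanism names, so one loop serves every io_mechanism. A sketch, where the array contents are assumed to match the two runs that follow:

  xnvme_bdevperf() {
      local -n io_pattern_ref=io_uring   # nameref: alias to the array named 'io_uring'
      local io_pattern
      for io_pattern in "${io_pattern_ref[@]}"; do
          echo "would run bdevperf -w $io_pattern"   # hypothetical stand-in for the real exec
      done
  }
  io_uring=(randread randwrite)   # assumed contents, matching the runs below
  xnvme_bdevperf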
00:37:51.926 13:32:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:51.926 13:32:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:51.926 { 00:37:51.926 "subsystems": [ 00:37:51.926 { 00:37:51.926 "subsystem": "bdev", 00:37:51.926 "config": [ 00:37:51.926 { 00:37:51.926 "params": { 00:37:51.926 "io_mechanism": "io_uring", 00:37:51.926 "conserve_cpu": true, 00:37:51.926 "filename": "/dev/nvme0n1", 00:37:51.926 "name": "xnvme_bdev" 00:37:51.926 }, 00:37:51.926 "method": "bdev_xnvme_create" 00:37:51.926 }, 00:37:51.926 { 00:37:51.926 "method": "bdev_wait_for_examine" 00:37:51.926 } 00:37:51.926 ] 00:37:51.926 } 00:37:51.926 ] 00:37:51.926 } 00:37:51.926 [2024-12-06 13:32:44.674871] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:37:51.926 [2024-12-06 13:32:44.675072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72798 ] 00:37:51.926 [2024-12-06 13:32:44.887811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:52.183 [2024-12-06 13:32:45.088943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:52.440 Running I/O for 5 seconds... 00:37:54.774 51840.00 IOPS, 202.50 MiB/s [2024-12-06T13:32:48.807Z] 51712.00 IOPS, 202.00 MiB/s [2024-12-06T13:32:49.739Z] 49386.67 IOPS, 192.92 MiB/s [2024-12-06T13:32:50.674Z] 47760.00 IOPS, 186.56 MiB/s 00:37:57.574 Latency(us) 00:37:57.574 [2024-12-06T13:32:50.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.574 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:37:57.574 xnvme_bdev : 5.00 47071.55 183.87 0.00 0.00 1355.59 819.20 4119.41 00:37:57.574 [2024-12-06T13:32:50.674Z] =================================================================================================================== 00:37:57.574 [2024-12-06T13:32:50.674Z] Total : 47071.55 183.87 0.00 0.00 1355.59 819.20 4119.41 00:37:58.952 13:32:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:37:58.952 13:32:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:37:58.952 13:32:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:37:58.952 13:32:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:37:58.952 13:32:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:58.952 { 00:37:58.952 "subsystems": [ 00:37:58.952 { 00:37:58.952 "subsystem": "bdev", 00:37:58.952 "config": [ 00:37:58.952 { 00:37:58.952 "params": { 00:37:58.952 "io_mechanism": "io_uring", 00:37:58.952 "conserve_cpu": true, 00:37:58.952 "filename": "/dev/nvme0n1", 00:37:58.952 "name": "xnvme_bdev" 00:37:58.952 }, 00:37:58.952 "method": "bdev_xnvme_create" 00:37:58.952 }, 00:37:58.952 { 00:37:58.952 "method": "bdev_wait_for_examine" 00:37:58.952 } 00:37:58.952 ] 00:37:58.952 } 00:37:58.952 ] 00:37:58.952 } 00:37:58.952 [2024-12-06 13:32:51.970879] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
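For reading the bdevperf invocation traced just above, the flags gloss roughly as follows (the comments are an interpretation of the example tool's usage, not quoted from it; gen_conf stands for the config generator seen in the trace):

  args=(
      -q 64           # queue depth per job
      -o 4096         # I/O size in bytes
      -w randwrite    # workload pattern
      -t 5            # run time in seconds
      -T xnvme_bdev   # restrict the run to the named bdev
  )
  ./build/examples/bdevperf --json <(gen_conf) "${args[@]}"

The interim "IOPS, MiB/s" lines under Running I/O are periodic samples taken during the run; the Latency table is the end-of-run summary.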
00:37:58.952 [2024-12-06 13:32:51.971052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72879 ] 00:37:59.212 [2024-12-06 13:32:52.149655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:59.212 [2024-12-06 13:32:52.298874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:59.780 Running I/O for 5 seconds... 00:38:02.092 43264.00 IOPS, 169.00 MiB/s [2024-12-06T13:32:55.758Z] 42400.00 IOPS, 165.62 MiB/s [2024-12-06T13:32:57.131Z] 42944.00 IOPS, 167.75 MiB/s [2024-12-06T13:32:58.067Z] 42688.00 IOPS, 166.75 MiB/s [2024-12-06T13:32:58.067Z] 42368.00 IOPS, 165.50 MiB/s 00:38:04.967 Latency(us) 00:38:04.967 [2024-12-06T13:32:58.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.967 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:38:04.967 xnvme_bdev : 5.01 42329.18 165.35 0.00 0.00 1507.22 787.99 4712.35 00:38:04.967 [2024-12-06T13:32:58.067Z] =================================================================================================================== 00:38:04.967 [2024-12-06T13:32:58.067Z] Total : 42329.18 165.35 0.00 0.00 1507.22 787.99 4712.35 00:38:06.344 ************************************ 00:38:06.344 END TEST xnvme_bdevperf 00:38:06.344 ************************************ 00:38:06.344 00:38:06.344 real 0m14.532s 00:38:06.344 user 0m7.085s 00:38:06.344 sys 0m6.985s 00:38:06.344 13:32:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:06.344 13:32:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:06.344 13:32:59 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:38:06.344 13:32:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:06.344 13:32:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:06.344 13:32:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:06.344 ************************************ 00:38:06.344 START TEST xnvme_fio_plugin 00:38:06.344 ************************************ 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:38:06.344 13:32:59 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:06.344 { 00:38:06.344 "subsystems": [ 00:38:06.344 { 00:38:06.344 "subsystem": "bdev", 00:38:06.344 "config": [ 00:38:06.344 { 00:38:06.344 "params": { 00:38:06.344 "io_mechanism": "io_uring", 00:38:06.344 "conserve_cpu": true, 00:38:06.344 "filename": "/dev/nvme0n1", 00:38:06.344 "name": "xnvme_bdev" 00:38:06.344 }, 00:38:06.344 "method": "bdev_xnvme_create" 00:38:06.344 }, 00:38:06.344 { 00:38:06.344 "method": "bdev_wait_for_examine" 00:38:06.344 } 00:38:06.344 ] 00:38:06.344 } 00:38:06.344 ] 00:38:06.344 } 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:06.344 13:32:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:06.344 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:38:06.344 fio-3.35 00:38:06.344 Starting 1 thread 00:38:12.911 00:38:12.911 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73011: Fri Dec 6 13:33:05 2024 00:38:12.911 read: IOPS=44.3k, BW=173MiB/s (181MB/s)(865MiB/5001msec) 00:38:12.911 slat (nsec): min=2880, max=55773, avg=4250.27, stdev=1436.89 00:38:12.911 clat (usec): min=388, max=3220, avg=1276.21, stdev=177.28 00:38:12.911 lat (usec): min=391, max=3257, avg=1280.47, stdev=177.79 00:38:12.911 clat percentiles (usec): 00:38:12.911 | 1.00th=[ 979], 5.00th=[ 1045], 10.00th=[ 1090], 20.00th=[ 1139], 00:38:12.911 | 30.00th=[ 1188], 40.00th=[ 1221], 50.00th=[ 1254], 60.00th=[ 1287], 00:38:12.911 | 70.00th=[ 1336], 80.00th=[ 1385], 90.00th=[ 1467], 95.00th=[ 1582], 00:38:12.911 | 99.00th=[ 1909], 99.50th=[ 2024], 99.90th=[ 2442], 99.95th=[ 2671], 00:38:12.911 | 99.99th=[ 2999] 
00:38:12.911 bw ( KiB/s): min=166904, max=190976, per=99.37%, avg=175918.44, stdev=7199.61, samples=9 00:38:12.911 iops : min=41726, max=47744, avg=43979.78, stdev=1799.87, samples=9 00:38:12.911 lat (usec) : 500=0.01%, 1000=1.88% 00:38:12.911 lat (msec) : 2=97.53%, 4=0.58% 00:38:12.911 cpu : usr=33.84%, sys=62.50%, ctx=11, majf=0, minf=762 00:38:12.911 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:38:12.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:12.911 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:38:12.911 issued rwts: total=221334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:12.911 latency : target=0, window=0, percentile=100.00%, depth=64 00:38:12.911 00:38:12.911 Run status group 0 (all jobs): 00:38:12.911 READ: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=865MiB (907MB), run=5001-5001msec 00:38:13.868 ----------------------------------------------------- 00:38:13.868 Suppressions used: 00:38:13.868 count bytes template 00:38:13.868 1 11 /usr/src/fio/parse.c 00:38:13.868 1 8 libtcmalloc_minimal.so 00:38:13.868 1 904 libcrypto.so 00:38:13.868 ----------------------------------------------------- 00:38:13.868 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:13.868 
13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:13.868 13:33:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:14.127 { 00:38:14.127 "subsystems": [ 00:38:14.127 { 00:38:14.127 "subsystem": "bdev", 00:38:14.127 "config": [ 00:38:14.127 { 00:38:14.127 "params": { 00:38:14.127 "io_mechanism": "io_uring", 00:38:14.127 "conserve_cpu": true, 00:38:14.127 "filename": "/dev/nvme0n1", 00:38:14.127 "name": "xnvme_bdev" 00:38:14.127 }, 00:38:14.127 "method": "bdev_xnvme_create" 00:38:14.127 }, 00:38:14.127 { 00:38:14.127 "method": "bdev_wait_for_examine" 00:38:14.127 } 00:38:14.127 ] 00:38:14.127 } 00:38:14.127 ] 00:38:14.127 } 00:38:14.127 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:38:14.127 fio-3.35 00:38:14.127 Starting 1 thread 00:38:20.709 00:38:20.709 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73108: Fri Dec 6 13:33:13 2024 00:38:20.709 write: IOPS=46.6k, BW=182MiB/s (191MB/s)(910MiB/5001msec); 0 zone resets 00:38:20.709 slat (usec): min=2, max=452, avg= 4.36, stdev= 2.73 00:38:20.709 clat (usec): min=654, max=4121, avg=1202.44, stdev=227.61 00:38:20.709 lat (usec): min=658, max=4131, avg=1206.80, stdev=228.55 00:38:20.709 clat percentiles (usec): 00:38:20.709 | 1.00th=[ 865], 5.00th=[ 930], 10.00th=[ 979], 20.00th=[ 1037], 00:38:20.709 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1172], 60.00th=[ 1205], 00:38:20.709 | 70.00th=[ 1254], 80.00th=[ 1319], 90.00th=[ 1434], 95.00th=[ 1614], 00:38:20.709 | 99.00th=[ 2040], 99.50th=[ 2245], 99.90th=[ 2704], 99.95th=[ 2835], 00:38:20.709 | 99.99th=[ 3916] 00:38:20.709 bw ( KiB/s): min=158914, max=216576, per=100.00%, avg=186580.00, stdev=20242.38, samples=9 00:38:20.709 iops : min=39728, max=54144, avg=46644.89, stdev=5060.70, samples=9 00:38:20.709 lat (usec) : 750=0.01%, 1000=13.00% 00:38:20.709 lat (msec) : 2=85.79%, 4=1.19%, 10=0.01% 00:38:20.709 cpu : usr=41.92%, sys=54.30%, ctx=48, majf=0, minf=763 00:38:20.709 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=24.9%, 32=50.1%, >=64=1.6% 00:38:20.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.710 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:38:20.710 issued rwts: total=0,232945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.710 latency : target=0, window=0, percentile=100.00%, depth=64 00:38:20.710 00:38:20.710 Run status group 0 (all jobs): 00:38:20.710 WRITE: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=910MiB (954MB), run=5001-5001msec 00:38:22.085 ----------------------------------------------------- 00:38:22.085 Suppressions used: 00:38:22.085 count bytes template 00:38:22.085 1 11 /usr/src/fio/parse.c 00:38:22.085 1 8 libtcmalloc_minimal.so 00:38:22.085 1 904 libcrypto.so 00:38:22.085 ----------------------------------------------------- 00:38:22.085 00:38:22.085 00:38:22.085 real 0m15.691s 00:38:22.085 user 0m8.188s 00:38:22.085 sys 0m6.849s 00:38:22.085 
************************************ 00:38:22.085 END TEST xnvme_fio_plugin 00:38:22.085 ************************************ 00:38:22.085 13:33:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:22.085 13:33:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:22.085 13:33:14 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:38:22.085 13:33:14 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:38:22.085 13:33:14 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:38:22.085 13:33:14 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:38:22.085 13:33:14 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:38:22.085 13:33:14 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:38:22.085 13:33:14 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:38:22.085 13:33:14 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:38:22.085 13:33:14 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:38:22.085 13:33:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:22.085 13:33:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:22.085 13:33:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:22.085 ************************************ 00:38:22.085 START TEST xnvme_rpc 00:38:22.085 ************************************ 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:38:22.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73200 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73200 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73200 ']' 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:22.085 13:33:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:22.085 [2024-12-06 13:33:15.063226] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
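From this point the outer loop switches io_mechanism to io_uring_cmd, and the filename moves from the block device /dev/nvme0n1 to /dev/ng0n1, the NVMe generic character device that the uring passthrough path drives. In rpc.py terms, the create call for this round (visible in the trace below) is:

  # io_uring_cmd targets the char device rather than the block device
  ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd

The rest of the round — field-by-field jq verification, bdev_xnvme_delete, process teardown — mirrors the io_uring rounds above.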
00:38:22.085 [2024-12-06 13:33:15.063729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73200 ] 00:38:22.343 [2024-12-06 13:33:15.263962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:22.343 [2024-12-06 13:33:15.414929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:23.718 xnvme_bdev 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:38:23.718 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73200 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73200 ']' 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73200 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73200 00:38:23.719 killing process with pid 73200 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73200' 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73200 00:38:23.719 13:33:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73200 00:38:26.999 00:38:26.999 real 0m4.602s 00:38:26.999 user 0m4.514s 00:38:26.999 sys 0m0.780s 00:38:26.999 13:33:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:26.999 13:33:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:26.999 ************************************ 00:38:26.999 END TEST xnvme_rpc 00:38:26.999 ************************************ 00:38:26.999 13:33:19 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:38:26.999 13:33:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:26.999 13:33:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:26.999 13:33:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:26.999 ************************************ 00:38:26.999 START TEST xnvme_bdevperf 00:38:26.999 ************************************ 00:38:26.999 13:33:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:38:26.999 13:33:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:38:26.999 13:33:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:38:26.999 13:33:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:26.999 13:33:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:38:26.999 13:33:19 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:38:26.999 13:33:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:26.999 13:33:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:26.999 { 00:38:26.999 "subsystems": [ 00:38:26.999 { 00:38:26.999 "subsystem": "bdev", 00:38:26.999 "config": [ 00:38:26.999 { 00:38:26.999 "params": { 00:38:26.999 "io_mechanism": "io_uring_cmd", 00:38:26.999 "conserve_cpu": false, 00:38:26.999 "filename": "/dev/ng0n1", 00:38:26.999 "name": "xnvme_bdev" 00:38:26.999 }, 00:38:26.999 "method": "bdev_xnvme_create" 00:38:26.999 }, 00:38:26.999 { 00:38:26.999 "method": "bdev_wait_for_examine" 00:38:26.999 } 00:38:26.999 ] 00:38:26.999 } 00:38:26.999 ] 00:38:26.999 } 00:38:26.999 [2024-12-06 13:33:19.696324] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:38:26.999 [2024-12-06 13:33:19.696554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73285 ] 00:38:26.999 [2024-12-06 13:33:19.896072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.999 [2024-12-06 13:33:20.048592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.566 Running I/O for 5 seconds... 00:38:29.432 54336.00 IOPS, 212.25 MiB/s [2024-12-06T13:33:23.909Z] 51328.00 IOPS, 200.50 MiB/s [2024-12-06T13:33:24.501Z] 50133.33 IOPS, 195.83 MiB/s [2024-12-06T13:33:25.875Z] 50288.00 IOPS, 196.44 MiB/s 00:38:32.775 Latency(us) 00:38:32.775 [2024-12-06T13:33:25.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:32.775 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:38:32.775 xnvme_bdev : 5.00 50188.77 196.05 0.00 0.00 1270.97 780.19 4056.99 00:38:32.775 [2024-12-06T13:33:25.875Z] =================================================================================================================== 00:38:32.775 [2024-12-06T13:33:25.875Z] Total : 50188.77 196.05 0.00 0.00 1270.97 780.19 4056.99 00:38:34.149 13:33:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:34.149 13:33:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:38:34.149 13:33:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:38:34.149 13:33:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:34.149 13:33:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:34.149 { 00:38:34.149 "subsystems": [ 00:38:34.149 { 00:38:34.149 "subsystem": "bdev", 00:38:34.149 "config": [ 00:38:34.149 { 00:38:34.149 "params": { 00:38:34.149 "io_mechanism": "io_uring_cmd", 00:38:34.149 "conserve_cpu": false, 00:38:34.149 "filename": "/dev/ng0n1", 00:38:34.149 "name": "xnvme_bdev" 00:38:34.149 }, 00:38:34.149 "method": "bdev_xnvme_create" 00:38:34.149 }, 00:38:34.149 { 00:38:34.149 "method": "bdev_wait_for_examine" 00:38:34.149 } 00:38:34.149 ] 00:38:34.149 } 00:38:34.149 ] 00:38:34.149 } 00:38:34.149 [2024-12-06 13:33:26.965898] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
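The randread table above is easy to sanity-check by hand; at a fixed 4 KiB block size and queue depth 64, throughput and latency are tied together. A back-of-the-envelope sketch over the reported numbers (these one-liners are not harness output):

# 4 KiB per IO, so MiB/s = IOPS * 4096 / 2^20 = IOPS / 256
awk 'BEGIN { print 50188.77 / 256 }'      # ~196.05, matches the MiB/s column
# Little's law: IOPS ~ queue depth / mean latency (depth 64, 1270.97 us average)
awk 'BEGIN { print 64 / 1270.97e-6 }'     # ~50356, within ~0.3% of the measured 50188.77

The small gap between the two IOPS figures is plausibly the time the queue spends below full depth, which the device-side average latency does not capture.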
00:38:34.149 [2024-12-06 13:33:26.966463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73365 ] 00:38:34.149 [2024-12-06 13:33:27.169599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.407 [2024-12-06 13:33:27.378166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.973 Running I/O for 5 seconds... 00:38:36.837 45760.00 IOPS, 178.75 MiB/s [2024-12-06T13:33:30.872Z] 46144.00 IOPS, 180.25 MiB/s [2024-12-06T13:33:32.245Z] 45952.00 IOPS, 179.50 MiB/s [2024-12-06T13:33:33.179Z] 45376.00 IOPS, 177.25 MiB/s [2024-12-06T13:33:33.179Z] 45107.20 IOPS, 176.20 MiB/s 00:38:40.079 Latency(us) 00:38:40.079 [2024-12-06T13:33:33.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.079 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:38:40.079 xnvme_bdev : 5.01 45074.86 176.07 0.00 0.00 1415.18 990.84 7115.34 00:38:40.079 [2024-12-06T13:33:33.179Z] =================================================================================================================== 00:38:40.079 [2024-12-06T13:33:33.179Z] Total : 45074.86 176.07 0.00 0.00 1415.18 990.84 7115.34 00:38:41.516 13:33:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:41.516 13:33:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:38:41.516 13:33:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:38:41.516 13:33:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:41.516 13:33:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:41.516 { 00:38:41.516 "subsystems": [ 00:38:41.516 { 00:38:41.516 "subsystem": "bdev", 00:38:41.517 "config": [ 00:38:41.517 { 00:38:41.517 "params": { 00:38:41.517 "io_mechanism": "io_uring_cmd", 00:38:41.517 "conserve_cpu": false, 00:38:41.517 "filename": "/dev/ng0n1", 00:38:41.517 "name": "xnvme_bdev" 00:38:41.517 }, 00:38:41.517 "method": "bdev_xnvme_create" 00:38:41.517 }, 00:38:41.517 { 00:38:41.517 "method": "bdev_wait_for_examine" 00:38:41.517 } 00:38:41.517 ] 00:38:41.517 } 00:38:41.517 ] 00:38:41.517 } 00:38:41.517 [2024-12-06 13:33:34.421348] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:38:41.517 [2024-12-06 13:33:34.421850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73451 ] 00:38:41.773 [2024-12-06 13:33:34.617733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.773 [2024-12-06 13:33:34.767338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.337 Running I/O for 5 seconds... 
00:38:44.344 93568.00 IOPS, 365.50 MiB/s [2024-12-06T13:33:38.378Z] 93248.00 IOPS, 364.25 MiB/s [2024-12-06T13:33:39.307Z] 90176.00 IOPS, 352.25 MiB/s [2024-12-06T13:33:40.240Z] 90464.00 IOPS, 353.38 MiB/s 00:38:47.140 Latency(us) 00:38:47.140 [2024-12-06T13:33:40.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.140 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:38:47.140 xnvme_bdev : 5.00 90183.32 352.28 0.00 0.00 706.62 431.06 2481.01 00:38:47.140 [2024-12-06T13:33:40.240Z] =================================================================================================================== 00:38:47.140 [2024-12-06T13:33:40.240Z] Total : 90183.32 352.28 0.00 0.00 706.62 431.06 2481.01 00:38:48.514 13:33:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:48.514 13:33:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:38:48.514 13:33:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:38:48.514 13:33:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:38:48.514 13:33:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:48.514 { 00:38:48.514 "subsystems": [ 00:38:48.514 { 00:38:48.514 "subsystem": "bdev", 00:38:48.514 "config": [ 00:38:48.514 { 00:38:48.514 "params": { 00:38:48.514 "io_mechanism": "io_uring_cmd", 00:38:48.514 "conserve_cpu": false, 00:38:48.514 "filename": "/dev/ng0n1", 00:38:48.514 "name": "xnvme_bdev" 00:38:48.514 }, 00:38:48.514 "method": "bdev_xnvme_create" 00:38:48.514 }, 00:38:48.514 { 00:38:48.514 "method": "bdev_wait_for_examine" 00:38:48.514 } 00:38:48.514 ] 00:38:48.514 } 00:38:48.514 ] 00:38:48.514 } 00:38:48.514 [2024-12-06 13:33:41.609194] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:38:48.515 [2024-12-06 13:33:41.609628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73531 ] 00:38:48.774 [2024-12-06 13:33:41.813390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.032 [2024-12-06 13:33:42.017096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.599 Running I/O for 5 seconds... 
00:38:51.496 32023.00 IOPS, 125.09 MiB/s [2024-12-06T13:33:45.536Z] 22417.00 IOPS, 87.57 MiB/s [2024-12-06T13:33:46.906Z] 28441.33 IOPS, 111.10 MiB/s [2024-12-06T13:33:47.482Z] 33091.00 IOPS, 129.26 MiB/s [2024-12-06T13:33:47.482Z] 36448.00 IOPS, 142.38 MiB/s 00:38:54.382 Latency(us) 00:38:54.382 [2024-12-06T13:33:47.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:54.382 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:38:54.382 xnvme_bdev : 5.00 36432.12 142.31 0.00 0.00 1752.16 76.07 20472.20 00:38:54.382 [2024-12-06T13:33:47.482Z] =================================================================================================================== 00:38:54.382 [2024-12-06T13:33:47.482Z] Total : 36432.12 142.31 0.00 0.00 1752.16 76.07 20472.20 00:38:56.341 00:38:56.341 real 0m29.420s 00:38:56.341 user 0m15.890s 00:38:56.341 sys 0m13.107s 00:38:56.341 13:33:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:56.341 13:33:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:56.341 ************************************ 00:38:56.341 END TEST xnvme_bdevperf 00:38:56.341 ************************************ 00:38:56.341 13:33:49 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:38:56.341 13:33:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:56.341 13:33:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:56.341 13:33:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:38:56.341 ************************************ 00:38:56.341 START TEST xnvme_fio_plugin 00:38:56.341 ************************************ 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
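The xnvme_fio_plugin test being set up here runs stock fio against SPDK's external ioengine shared object instead of bdevperf. Stripped of the xtrace and sanitizer plumbing, the invocation has roughly this shape; a sketch that assumes a plain config file standing in for the /dev/fd/62 pipe the harness builds, with the ASan preload omitted:

# Same JSON bdev config the harness pipes in on fd 62
cat > /tmp/xnvme.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "io_mechanism": "io_uring_cmd", "conserve_cpu": false,
                "filename": "/dev/ng0n1", "name": "xnvme_bdev" },
    "method": "bdev_xnvme_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
# fio loads SPDK's fio plugin via LD_PRELOAD and addresses the bdev by name
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev

On this ASan build the harness additionally prepends /usr/lib64/libasan.so.8 to LD_PRELOAD, which is what the grep libasan / awk '{print $3}' steps below are resolving.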
00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:56.341 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:38:56.342 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:56.342 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:56.342 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:56.342 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:38:56.342 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:56.342 13:33:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:38:56.342 { 00:38:56.342 "subsystems": [ 00:38:56.342 { 00:38:56.342 "subsystem": "bdev", 00:38:56.342 "config": [ 00:38:56.342 { 00:38:56.342 "params": { 00:38:56.342 "io_mechanism": "io_uring_cmd", 00:38:56.342 "conserve_cpu": false, 00:38:56.342 "filename": "/dev/ng0n1", 00:38:56.342 "name": "xnvme_bdev" 00:38:56.342 }, 00:38:56.342 "method": "bdev_xnvme_create" 00:38:56.342 }, 00:38:56.342 { 00:38:56.342 "method": "bdev_wait_for_examine" 00:38:56.342 } 00:38:56.342 ] 00:38:56.342 } 00:38:56.342 ] 00:38:56.342 } 00:38:56.342 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:38:56.342 fio-3.35 00:38:56.342 Starting 1 thread 00:39:02.895 00:39:02.895 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73659: Fri Dec 6 13:33:55 2024 00:39:02.895 read: IOPS=49.8k, BW=194MiB/s (204MB/s)(972MiB/5001msec) 00:39:02.895 slat (nsec): min=2724, max=64826, avg=4118.93, stdev=1332.48 00:39:02.895 clat (usec): min=747, max=8671, avg=1122.67, stdev=156.45 00:39:02.895 lat (usec): min=751, max=8675, avg=1126.78, stdev=156.94 00:39:02.895 clat percentiles (usec): 00:39:02.895 | 1.00th=[ 848], 5.00th=[ 914], 10.00th=[ 955], 20.00th=[ 1004], 00:39:02.895 | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:39:02.895 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1287], 95.00th=[ 1352], 00:39:02.895 | 99.00th=[ 1663], 99.50th=[ 1827], 99.90th=[ 2245], 99.95th=[ 2442], 00:39:02.895 | 99.99th=[ 2933] 00:39:02.895 bw ( KiB/s): min=185856, max=214528, per=100.00%, avg=199223.11, stdev=7951.87, samples=9 00:39:02.895 iops : min=46464, max=53632, avg=49806.22, stdev=1987.93, samples=9 00:39:02.895 lat (usec) : 750=0.01%, 1000=19.26% 00:39:02.895 lat (msec) : 2=80.55%, 4=0.19%, 10=0.01% 00:39:02.895 cpu : usr=36.24%, sys=62.90%, ctx=8, majf=0, minf=762 00:39:02.895 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:39:02.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:02.895 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:39:02.895 issued rwts: total=248861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:02.895 latency : target=0, window=0, percentile=100.00%, depth=64 00:39:02.895 00:39:02.895 Run status group 0 (all jobs): 00:39:02.895 READ: bw=194MiB/s (204MB/s), 194MiB/s-194MiB/s (204MB/s-204MB/s), io=972MiB (1019MB), run=5001-5001msec 00:39:03.828 ----------------------------------------------------- 00:39:03.828 Suppressions used: 00:39:03.828 count bytes template 00:39:03.828 1 11 /usr/src/fio/parse.c 00:39:03.828 1 8 libtcmalloc_minimal.so 00:39:03.828 1 904 libcrypto.so 00:39:03.828 ----------------------------------------------------- 00:39:03.828 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:03.828 13:33:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:03.828 { 00:39:03.828 "subsystems": [ 00:39:03.828 { 00:39:03.828 "subsystem": "bdev", 00:39:03.828 "config": [ 00:39:03.828 { 00:39:03.828 "params": { 00:39:03.828 "io_mechanism": "io_uring_cmd", 00:39:03.828 "conserve_cpu": false, 00:39:03.828 "filename": "/dev/ng0n1", 00:39:03.828 "name": "xnvme_bdev" 00:39:03.828 }, 00:39:03.828 "method": "bdev_xnvme_create" 00:39:03.828 }, 00:39:03.828 { 00:39:03.828 "method": "bdev_wait_for_examine" 00:39:03.828 } 00:39:03.828 ] 00:39:03.828 } 00:39:03.828 ] 00:39:03.828 } 00:39:04.088 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:39:04.088 fio-3.35 00:39:04.088 Starting 1 thread 00:39:10.655 00:39:10.655 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73757: Fri Dec 6 13:34:03 2024 00:39:10.655 write: IOPS=46.7k, BW=182MiB/s (191MB/s)(912MiB/5001msec); 0 zone resets 00:39:10.655 slat (usec): min=2, max=104, avg= 4.69, stdev= 1.85 00:39:10.655 clat (usec): min=795, max=2675, avg=1187.44, stdev=198.12 00:39:10.655 lat (usec): min=798, max=2682, avg=1192.13, stdev=198.89 00:39:10.655 clat percentiles (usec): 00:39:10.655 | 1.00th=[ 898], 5.00th=[ 947], 10.00th=[ 988], 20.00th=[ 1037], 00:39:10.655 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:39:10.655 | 70.00th=[ 1237], 80.00th=[ 1303], 90.00th=[ 1418], 95.00th=[ 1582], 00:39:10.655 | 99.00th=[ 1893], 99.50th=[ 1991], 99.90th=[ 2343], 99.95th=[ 2442], 00:39:10.655 | 99.99th=[ 2606] 00:39:10.655 bw ( KiB/s): min=174080, max=200192, per=98.92%, avg=184619.22, stdev=9911.21, samples=9 00:39:10.655 iops : min=43520, max=50048, avg=46155.00, stdev=2477.86, samples=9 00:39:10.655 lat (usec) : 1000=12.84% 00:39:10.655 lat (msec) : 2=86.68%, 4=0.47% 00:39:10.655 cpu : usr=37.70%, sys=61.28%, ctx=7, majf=0, minf=763 00:39:10.655 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:39:10.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:10.655 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:39:10.655 issued rwts: total=0,233344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:10.655 latency : target=0, window=0, percentile=100.00%, depth=64 00:39:10.655 00:39:10.655 Run status group 0 (all jobs): 00:39:10.655 WRITE: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=912MiB (956MB), run=5001-5001msec 00:39:11.588 ----------------------------------------------------- 00:39:11.588 Suppressions used: 00:39:11.588 count bytes template 00:39:11.588 1 11 /usr/src/fio/parse.c 00:39:11.588 1 8 libtcmalloc_minimal.so 00:39:11.588 1 904 libcrypto.so 00:39:11.588 ----------------------------------------------------- 00:39:11.588 00:39:11.845 00:39:11.845 real 0m15.655s 00:39:11.845 user 0m8.066s 00:39:11.845 sys 0m7.208s 00:39:11.845 13:34:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:11.845 ************************************ 00:39:11.845 END TEST xnvme_fio_plugin 00:39:11.845 ************************************ 00:39:11.845 13:34:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:39:11.845 13:34:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:39:11.845 13:34:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:39:11.845 13:34:04 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:39:11.845 
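From here the harness repeats all three tests with conserve_cpu=true. Mechanically the switch is tiny: the cc map declared at the top of xnvme_rpc turns the flag into an extra -c on the create call, roughly:

# cc["false"]= (empty) vs cc["true"]=-c
rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd       # first round
rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c    # this round

and the rpc_xnvme conserve_cpu check is expected to flip from false to true, which the [[ true == \t\r\u\e ]] assertion below confirms.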
13:34:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:39:11.845 13:34:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:11.845 13:34:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:11.845 13:34:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:11.845 ************************************ 00:39:11.845 START TEST xnvme_rpc 00:39:11.845 ************************************ 00:39:11.845 13:34:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:39:11.845 13:34:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:39:11.845 13:34:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:39:11.845 13:34:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:39:11.845 13:34:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:39:11.845 13:34:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73842 00:39:11.845 13:34:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73842 00:39:11.845 13:34:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:11.845 13:34:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73842 ']' 00:39:11.845 13:34:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:11.845 13:34:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:11.845 13:34:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.846 13:34:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:11.846 13:34:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:11.846 [2024-12-06 13:34:04.921906] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:39:11.846 [2024-12-06 13:34:04.922105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73842 ] 00:39:12.103 [2024-12-06 13:34:05.134381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.360 [2024-12-06 13:34:05.332726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:13.765 xnvme_bdev 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73842 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73842 ']' 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73842 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:39:13.765 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:13.766 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73842 00:39:13.766 killing process with pid 73842 00:39:13.766 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:13.766 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:13.766 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73842' 00:39:13.766 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73842 00:39:13.766 13:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73842 00:39:17.051 00:39:17.051 real 0m4.707s 00:39:17.051 user 0m4.628s 00:39:17.051 sys 0m0.778s 00:39:17.051 13:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:17.051 13:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:17.051 ************************************ 00:39:17.051 END TEST xnvme_rpc 00:39:17.051 ************************************ 00:39:17.051 13:34:09 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:39:17.051 13:34:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:17.051 13:34:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:17.051 13:34:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:17.051 ************************************ 00:39:17.051 START TEST xnvme_bdevperf 00:39:17.051 ************************************ 00:39:17.051 13:34:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:39:17.051 13:34:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:39:17.051 13:34:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:39:17.051 13:34:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:17.051 13:34:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:39:17.051 13:34:09 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:39:17.051 13:34:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:39:17.051 13:34:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:17.051 { 00:39:17.051 "subsystems": [ 00:39:17.051 { 00:39:17.051 "subsystem": "bdev", 00:39:17.051 "config": [ 00:39:17.051 { 00:39:17.051 "params": { 00:39:17.051 "io_mechanism": "io_uring_cmd", 00:39:17.051 "conserve_cpu": true, 00:39:17.051 "filename": "/dev/ng0n1", 00:39:17.051 "name": "xnvme_bdev" 00:39:17.051 }, 00:39:17.051 "method": "bdev_xnvme_create" 00:39:17.051 }, 00:39:17.051 { 00:39:17.051 "method": "bdev_wait_for_examine" 00:39:17.051 } 00:39:17.051 ] 00:39:17.051 } 00:39:17.051 ] 00:39:17.051 } 00:39:17.051 [2024-12-06 13:34:09.678195] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:39:17.051 [2024-12-06 13:34:09.678426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73933 ] 00:39:17.051 [2024-12-06 13:34:09.878799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.051 [2024-12-06 13:34:10.028572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:17.619 Running I/O for 5 seconds... 00:39:19.486 50240.00 IOPS, 196.25 MiB/s [2024-12-06T13:34:13.519Z] 52287.50 IOPS, 204.25 MiB/s [2024-12-06T13:34:14.496Z] 53247.33 IOPS, 208.00 MiB/s [2024-12-06T13:34:15.872Z] 53727.50 IOPS, 209.87 MiB/s 00:39:22.772 Latency(us) 00:39:22.772 [2024-12-06T13:34:15.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:22.772 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:39:22.772 xnvme_bdev : 5.00 52834.86 206.39 0.00 0.00 1207.46 799.70 3963.37 00:39:22.772 [2024-12-06T13:34:15.872Z] =================================================================================================================== 00:39:22.772 [2024-12-06T13:34:15.872Z] Total : 52834.86 206.39 0.00 0.00 1207.46 799.70 3963.37 00:39:24.150 13:34:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:24.150 13:34:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:39:24.150 13:34:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:39:24.150 13:34:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:39:24.150 13:34:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:24.150 { 00:39:24.150 "subsystems": [ 00:39:24.150 { 00:39:24.150 "subsystem": "bdev", 00:39:24.150 "config": [ 00:39:24.150 { 00:39:24.150 "params": { 00:39:24.150 "io_mechanism": "io_uring_cmd", 00:39:24.150 "conserve_cpu": true, 00:39:24.150 "filename": "/dev/ng0n1", 00:39:24.150 "name": "xnvme_bdev" 00:39:24.150 }, 00:39:24.150 "method": "bdev_xnvme_create" 00:39:24.150 }, 00:39:24.150 { 00:39:24.150 "method": "bdev_wait_for_examine" 00:39:24.150 } 00:39:24.150 ] 00:39:24.150 } 00:39:24.150 ] 00:39:24.150 } 00:39:24.150 [2024-12-06 13:34:16.957138] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:39:24.150 [2024-12-06 13:34:16.957292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74013 ] 00:39:24.150 [2024-12-06 13:34:17.135592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.408 [2024-12-06 13:34:17.285745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:24.666 Running I/O for 5 seconds... 00:39:26.990 44416.00 IOPS, 173.50 MiB/s [2024-12-06T13:34:21.079Z] 45184.00 IOPS, 176.50 MiB/s [2024-12-06T13:34:22.012Z] 45653.33 IOPS, 178.33 MiB/s [2024-12-06T13:34:22.944Z] 45600.00 IOPS, 178.12 MiB/s 00:39:29.844 Latency(us) 00:39:29.844 [2024-12-06T13:34:22.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:29.844 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:39:29.844 xnvme_bdev : 5.00 45501.35 177.74 0.00 0.00 1401.56 764.59 7645.87 00:39:29.844 [2024-12-06T13:34:22.944Z] =================================================================================================================== 00:39:29.844 [2024-12-06T13:34:22.944Z] Total : 45501.35 177.74 0.00 0.00 1401.56 764.59 7645.87 00:39:31.217 13:34:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:31.217 13:34:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:39:31.217 13:34:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:39:31.217 13:34:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:39:31.217 13:34:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:31.217 { 00:39:31.217 "subsystems": [ 00:39:31.217 { 00:39:31.217 "subsystem": "bdev", 00:39:31.217 "config": [ 00:39:31.217 { 00:39:31.217 "params": { 00:39:31.217 "io_mechanism": "io_uring_cmd", 00:39:31.217 "conserve_cpu": true, 00:39:31.217 "filename": "/dev/ng0n1", 00:39:31.217 "name": "xnvme_bdev" 00:39:31.217 }, 00:39:31.217 "method": "bdev_xnvme_create" 00:39:31.217 }, 00:39:31.217 { 00:39:31.217 "method": "bdev_wait_for_examine" 00:39:31.217 } 00:39:31.217 ] 00:39:31.217 } 00:39:31.217 ] 00:39:31.217 } 00:39:31.217 [2024-12-06 13:34:24.257934] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:39:31.217 [2024-12-06 13:34:24.258138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74093 ] 00:39:31.475 [2024-12-06 13:34:24.440799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:31.733 [2024-12-06 13:34:24.590652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:31.991 Running I/O for 5 seconds... 
00:39:34.308 96192.00 IOPS, 375.75 MiB/s [2024-12-06T13:34:28.344Z] 95360.00 IOPS, 372.50 MiB/s [2024-12-06T13:34:29.276Z] 94250.67 IOPS, 368.17 MiB/s [2024-12-06T13:34:30.210Z] 93008.00 IOPS, 363.31 MiB/s [2024-12-06T13:34:30.210Z] 92710.40 IOPS, 362.15 MiB/s 00:39:37.110 Latency(us) 00:39:37.110 [2024-12-06T13:34:30.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:37.110 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:39:37.110 xnvme_bdev : 5.00 92683.96 362.05 0.00 0.00 687.55 433.01 2543.42 00:39:37.110 [2024-12-06T13:34:30.210Z] =================================================================================================================== 00:39:37.110 [2024-12-06T13:34:30.210Z] Total : 92683.96 362.05 0.00 0.00 687.55 433.01 2543.42 00:39:38.482 13:34:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:38.482 13:34:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:39:38.482 13:34:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:39:38.482 13:34:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:39:38.482 13:34:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:38.482 { 00:39:38.482 "subsystems": [ 00:39:38.482 { 00:39:38.482 "subsystem": "bdev", 00:39:38.482 "config": [ 00:39:38.482 { 00:39:38.482 "params": { 00:39:38.482 "io_mechanism": "io_uring_cmd", 00:39:38.482 "conserve_cpu": true, 00:39:38.482 "filename": "/dev/ng0n1", 00:39:38.482 "name": "xnvme_bdev" 00:39:38.482 }, 00:39:38.482 "method": "bdev_xnvme_create" 00:39:38.482 }, 00:39:38.482 { 00:39:38.482 "method": "bdev_wait_for_examine" 00:39:38.482 } 00:39:38.482 ] 00:39:38.482 } 00:39:38.482 ] 00:39:38.482 } 00:39:38.482 [2024-12-06 13:34:31.487343] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:39:38.482 [2024-12-06 13:34:31.487574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74173 ] 00:39:38.740 [2024-12-06 13:34:31.673460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.740 [2024-12-06 13:34:31.836102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.357 Running I/O for 5 seconds... 
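With the unmap pass now complete for both rounds, the conserve_cpu effect on raw IOPS can be read straight off the two Total rows (90183.32 without -c, 92683.96 with it). A quick delta, again as a sketch over the reported figures:

awk 'BEGIN { printf "+%.1f%%\n", (92683.96 - 90183.32) / 90183.32 * 100 }'   # ~+2.8%

a modest gain; the larger payoff shows up as reduced sys time once the stage totals are printed below.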
00:39:41.674 43904.00 IOPS, 171.50 MiB/s [2024-12-06T13:34:35.338Z] 43485.50 IOPS, 169.87 MiB/s [2024-12-06T13:34:36.708Z] 43135.67 IOPS, 168.50 MiB/s [2024-12-06T13:34:37.641Z] 43385.25 IOPS, 169.47 MiB/s [2024-12-06T13:34:37.641Z] 43572.40 IOPS, 170.20 MiB/s 00:39:44.541 Latency(us) 00:39:44.541 [2024-12-06T13:34:37.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:44.541 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:39:44.541 xnvme_bdev : 5.01 43516.89 169.99 0.00 0.00 1464.12 91.67 15728.64 00:39:44.541 [2024-12-06T13:34:37.641Z] =================================================================================================================== 00:39:44.541 [2024-12-06T13:34:37.641Z] Total : 43516.89 169.99 0.00 0.00 1464.12 91.67 15728.64 00:39:45.477 00:39:45.477 real 0m29.012s 00:39:45.477 user 0m16.800s 00:39:45.477 sys 0m10.182s 00:39:45.477 13:34:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:45.477 13:34:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:45.477 ************************************ 00:39:45.477 END TEST xnvme_bdevperf 00:39:45.477 ************************************ 00:39:45.735 13:34:38 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:39:45.735 13:34:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:45.735 13:34:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:45.735 13:34:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:39:45.735 ************************************ 00:39:45.735 START TEST xnvme_fio_plugin 00:39:45.735 ************************************ 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 
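Before the fio pass, the bdevperf stage totals above give the clearest view of what -c buys: wall-clock time is essentially unchanged (0m29.420s vs 0m29.012s) while kernel time drops from 0m13.107s to 0m10.182s. As a one-liner over those figures (same caveat as before, not harness output):

awk 'BEGIN { printf "-%.0f%% sys time\n", (13.107 - 10.182) / 13.107 * 100 }'   # ~22% less

which is consistent with what a flag named conserve_cpu should do: trade a little polling aggressiveness for CPU time rather than chase throughput.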
00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:45.735 13:34:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:45.735 { 00:39:45.735 "subsystems": [ 00:39:45.735 { 00:39:45.735 "subsystem": "bdev", 00:39:45.735 "config": [ 00:39:45.735 { 00:39:45.735 "params": { 00:39:45.735 "io_mechanism": "io_uring_cmd", 00:39:45.735 "conserve_cpu": true, 00:39:45.735 "filename": "/dev/ng0n1", 00:39:45.735 "name": "xnvme_bdev" 00:39:45.735 }, 00:39:45.735 "method": "bdev_xnvme_create" 00:39:45.735 }, 00:39:45.735 { 00:39:45.735 "method": "bdev_wait_for_examine" 00:39:45.735 } 00:39:45.735 ] 00:39:45.735 } 00:39:45.735 ] 00:39:45.735 } 00:39:45.994 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:39:45.994 fio-3.35 00:39:45.994 Starting 1 thread 00:39:52.587 00:39:52.587 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74297: Fri Dec 6 13:34:44 2024 00:39:52.587 read: IOPS=48.5k, BW=190MiB/s (199MB/s)(948MiB/5001msec) 00:39:52.587 slat (usec): min=2, max=236, avg= 3.89, stdev= 1.18 00:39:52.587 clat (usec): min=765, max=2989, avg=1165.86, stdev=154.72 00:39:52.587 lat (usec): min=769, max=2995, avg=1169.74, stdev=154.97 00:39:52.587 clat percentiles (usec): 00:39:52.587 | 1.00th=[ 865], 5.00th=[ 938], 10.00th=[ 988], 20.00th=[ 1045], 00:39:52.587 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:39:52.587 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1336], 95.00th=[ 1385], 00:39:52.587 | 99.00th=[ 1713], 99.50th=[ 1827], 99.90th=[ 2057], 99.95th=[ 2540], 00:39:52.587 | 99.99th=[ 2868] 00:39:52.587 bw ( KiB/s): min=179712, max=220160, per=99.73%, avg=193592.89, stdev=12545.16, samples=9 00:39:52.587 iops : min=44928, max=55040, avg=48398.22, stdev=3136.29, samples=9 00:39:52.587 lat (usec) : 1000=11.35% 00:39:52.587 lat (msec) : 2=88.52%, 4=0.13% 00:39:52.587 cpu : usr=38.63%, sys=58.93%, ctx=11, majf=0, minf=762 00:39:52.587 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:39:52.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.587 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:39:52.587 issued rwts: total=242688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:52.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:39:52.587 00:39:52.587 Run status group 0 (all jobs): 00:39:52.587 READ: bw=190MiB/s (199MB/s), 190MiB/s-190MiB/s (199MB/s-199MB/s), io=948MiB (994MB), run=5001-5001msec 00:39:53.549 ----------------------------------------------------- 00:39:53.549 Suppressions used: 00:39:53.549 count bytes template 00:39:53.549 1 11 /usr/src/fio/parse.c 00:39:53.549 1 8 libtcmalloc_minimal.so 00:39:53.549 1 904 libcrypto.so 00:39:53.549 ----------------------------------------------------- 00:39:53.549 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:53.549 13:34:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:39:53.549 { 00:39:53.549 "subsystems": [ 00:39:53.549 { 00:39:53.549 "subsystem": "bdev", 00:39:53.549 "config": [ 00:39:53.549 { 00:39:53.549 "params": { 00:39:53.549 "io_mechanism": "io_uring_cmd", 00:39:53.549 "conserve_cpu": true, 00:39:53.549 "filename": "/dev/ng0n1", 00:39:53.549 "name": "xnvme_bdev" 00:39:53.549 }, 00:39:53.549 "method": "bdev_xnvme_create" 00:39:53.549 }, 00:39:53.549 { 00:39:53.549 "method": "bdev_wait_for_examine" 00:39:53.549 } 00:39:53.549 ] 00:39:53.549 } 00:39:53.549 ] 00:39:53.549 } 00:39:53.549 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:39:53.549 fio-3.35 00:39:53.549 Starting 1 thread 00:40:00.109 00:40:00.109 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74399: Fri Dec 6 13:34:52 2024 00:40:00.109 write: IOPS=46.2k, BW=180MiB/s (189MB/s)(902MiB/5001msec); 0 zone resets 00:40:00.109 slat (usec): min=2, max=437, avg= 4.60, stdev= 2.88 00:40:00.109 clat (usec): min=116, max=11623, avg=1205.00, stdev=284.81 00:40:00.109 lat (usec): min=121, max=11628, avg=1209.60, stdev=285.71 00:40:00.109 clat percentiles (usec): 00:40:00.109 | 1.00th=[ 848], 5.00th=[ 930], 10.00th=[ 979], 20.00th=[ 1029], 00:40:00.109 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1156], 60.00th=[ 1205], 00:40:00.109 | 70.00th=[ 1254], 80.00th=[ 1319], 90.00th=[ 1500], 95.00th=[ 1713], 00:40:00.109 | 99.00th=[ 1991], 99.50th=[ 2147], 99.90th=[ 3163], 99.95th=[ 3785], 00:40:00.109 | 99.99th=[10159] 00:40:00.109 bw ( KiB/s): min=172032, max=204288, per=100.00%, avg=187847.11, stdev=10106.15, samples=9 00:40:00.109 iops : min=43008, max=51072, avg=46961.78, stdev=2526.54, samples=9 00:40:00.109 lat (usec) : 250=0.01%, 500=0.03%, 750=0.09%, 1000=14.05% 00:40:00.109 lat (msec) : 2=84.89%, 4=0.90%, 10=0.03%, 20=0.01% 00:40:00.109 cpu : usr=46.52%, sys=50.06%, ctx=12, majf=0, minf=763 00:40:00.109 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.3%, >=64=1.6% 00:40:00.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:00.109 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:40:00.109 issued rwts: total=0,230814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:00.109 latency : target=0, window=0, percentile=100.00%, depth=64 00:40:00.110 00:40:00.110 Run status group 0 (all jobs): 00:40:00.110 WRITE: bw=180MiB/s (189MB/s), 180MiB/s-180MiB/s (189MB/s-189MB/s), io=902MiB (945MB), run=5001-5001msec 00:40:01.046 ----------------------------------------------------- 00:40:01.046 Suppressions used: 00:40:01.046 count bytes template 00:40:01.046 1 11 /usr/src/fio/parse.c 00:40:01.046 1 8 libtcmalloc_minimal.so 00:40:01.046 1 904 libcrypto.so 00:40:01.046 ----------------------------------------------------- 00:40:01.046 00:40:01.305 00:40:01.305 real 0m15.566s 00:40:01.305 user 0m8.670s 00:40:01.305 sys 0m6.339s 00:40:01.305 13:34:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:01.305 ************************************ 00:40:01.305 END TEST xnvme_fio_plugin 00:40:01.305 ************************************ 00:40:01.305 13:34:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:40:01.305 13:34:54 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73842 00:40:01.305 13:34:54 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73842 ']' 00:40:01.305 13:34:54 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73842 00:40:01.305 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73842) - No such process 00:40:01.305 Process with pid 73842 is not found 00:40:01.305 13:34:54 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73842 is not found' 00:40:01.305 00:40:01.305 real 4m8.197s 00:40:01.305 user 2m13.863s 00:40:01.305 sys 1m36.891s 00:40:01.305 13:34:54 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:01.305 13:34:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:01.305 ************************************ 00:40:01.305 END TEST nvme_xnvme 00:40:01.305 ************************************ 00:40:01.305 13:34:54 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:40:01.305 13:34:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:01.305 13:34:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:01.305 13:34:54 -- common/autotest_common.sh@10 -- # set +x 00:40:01.305 ************************************ 00:40:01.305 START TEST blockdev_xnvme 00:40:01.305 ************************************ 00:40:01.305 13:34:54 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:40:01.305 * Looking for test storage... 00:40:01.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:40:01.305 13:34:54 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:01.305 13:34:54 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:40:01.305 13:34:54 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:01.565 13:34:54 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:01.565 13:34:54 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:40:01.565 13:34:54 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:01.565 13:34:54 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:01.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.565 --rc genhtml_branch_coverage=1 00:40:01.565 --rc genhtml_function_coverage=1 00:40:01.565 --rc genhtml_legend=1 00:40:01.565 --rc geninfo_all_blocks=1 00:40:01.565 --rc geninfo_unexecuted_blocks=1 00:40:01.565 00:40:01.565 ' 00:40:01.565 13:34:54 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:01.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.565 --rc genhtml_branch_coverage=1 00:40:01.565 --rc genhtml_function_coverage=1 00:40:01.565 --rc genhtml_legend=1 00:40:01.565 --rc geninfo_all_blocks=1 00:40:01.565 --rc geninfo_unexecuted_blocks=1 00:40:01.565 00:40:01.565 ' 00:40:01.565 13:34:54 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:01.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.565 --rc genhtml_branch_coverage=1 00:40:01.565 --rc genhtml_function_coverage=1 00:40:01.565 --rc genhtml_legend=1 00:40:01.565 --rc geninfo_all_blocks=1 00:40:01.565 --rc geninfo_unexecuted_blocks=1 00:40:01.565 00:40:01.565 ' 00:40:01.565 13:34:54 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:01.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.565 --rc genhtml_branch_coverage=1 00:40:01.565 --rc genhtml_function_coverage=1 00:40:01.565 --rc genhtml_legend=1 00:40:01.565 --rc geninfo_all_blocks=1 00:40:01.565 --rc geninfo_unexecuted_blocks=1 00:40:01.565 00:40:01.565 ' 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
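What the scripts/common.sh xtrace above is doing: "lt 1.15 2" splits both version strings on "." and "-" and compares them field by field, so the harness can tell whether the installed lcov predates 2.x and needs the extra --rc coverage flags. A condensed standalone sketch of the same comparison (ver_lt is a hypothetical helper name, not the script's own):

    ver_lt() {
        # true (exit 0) if dotted version $1 sorts strictly before $2
        local IFS=.-
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not "less than"
    }
    ver_lt 1.15 2 && echo 'lcov < 2: enable --rc lcov_branch_coverage=1 etc.'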
# export RPC_PIPE_TIMEOUT=30 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74539 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:40:01.565 13:34:54 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74539 00:40:01.565 13:34:54 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74539 ']' 00:40:01.565 13:34:54 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:01.565 13:34:54 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:01.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:01.565 13:34:54 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:01.565 13:34:54 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:01.565 13:34:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:01.565 [2024-12-06 13:34:54.650467] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:40:01.565 [2024-12-06 13:34:54.650618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74539 ] 00:40:01.824 [2024-12-06 13:34:54.829240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.083 [2024-12-06 13:34:54.977949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:03.016 13:34:56 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:03.016 13:34:56 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:40:03.016 13:34:56 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:40:03.016 13:34:56 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:40:03.016 13:34:56 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:40:03.016 13:34:56 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:40:03.016 13:34:56 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:40:03.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:04.178 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:40:04.178 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:40:04.437 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:40:04.437 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:40:04.437 13:34:57 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:40:04.437 13:34:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:40:04.437 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:40:04.438 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:40:04.438 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:04.438 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:40:04.438 nvme0n1 00:40:04.438 nvme0n2 00:40:04.438 nvme0n3 00:40:04.438 nvme1n1 00:40:04.438 nvme2n1 00:40:04.438 nvme3n1 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.438 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.438 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:40:04.438 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.438 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.438 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:04.438 
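Net effect of the setup_xnvme_conf trace above: every /dev/nvme*n* block device passed the zoned check (all report "none"), so six bdev_xnvme_create lines are queued and then piped to rpc_cmd in one batch; the trailing -c matches the "conserve_cpu": true knob seen in the fio JSON earlier. A condensed sketch of that loop, with the zoned-namespace bookkeeping omitted:

    io_mechanism=io_uring
    nvmes=()
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue    # only real block devices
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
    done
    printf '%s\n' "${nvmes[@]}"       # the real script feeds this to rpc_cmd's stdin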
13:34:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.438 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:40:04.438 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:40:04.438 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.438 13:34:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:04.697 13:34:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.697 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:40:04.697 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:40:04.698 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "7168a055-0cf5-4284-9f38-8a133b5104f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7168a055-0cf5-4284-9f38-8a133b5104f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "6fe39b50-0eb7-437d-b574-d10d85478e29"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6fe39b50-0eb7-437d-b574-d10d85478e29",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "859a0b7f-c9e9-48cb-8d80-5bd3f64a1602"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "859a0b7f-c9e9-48cb-8d80-5bd3f64a1602",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "adeefa6e-d0f6-4d55-8bde-d323f4e1e18d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "adeefa6e-d0f6-4d55-8bde-d323f4e1e18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "5622778c-e430-42eb-b3fd-ccbc22be5d5c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5622778c-e430-42eb-b3fd-ccbc22be5d5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "004fba90-28b9-43d7-96af-5d99db82a39c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "004fba90-28b9-43d7-96af-5d99db82a39c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:40:04.698 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:40:04.698 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:40:04.698 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:40:04.698 13:34:57 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74539 00:40:04.698 13:34:57 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74539 ']' 00:40:04.698 13:34:57 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74539 00:40:04.698 13:34:57 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:40:04.698 13:34:57 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:04.698 13:34:57 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 74539 00:40:04.698 13:34:57 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:04.698 13:34:57 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:04.698 killing process with pid 74539 00:40:04.698 13:34:57 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74539' 00:40:04.698 13:34:57 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74539 00:40:04.698 13:34:57 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74539 00:40:07.984 13:35:00 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:07.985 13:35:00 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:40:07.985 13:35:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:40:07.985 13:35:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:07.985 13:35:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:07.985 ************************************ 00:40:07.985 START TEST bdev_hello_world 00:40:07.985 ************************************ 00:40:07.985 13:35:00 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:40:07.985 [2024-12-06 13:35:00.567997] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:40:07.985 [2024-12-06 13:35:00.568207] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74833 ] 00:40:07.985 [2024-12-06 13:35:00.760936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:07.985 [2024-12-06 13:35:00.912815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.551 [2024-12-06 13:35:01.444707] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:40:08.551 [2024-12-06 13:35:01.444774] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:40:08.551 [2024-12-06 13:35:01.444794] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:40:08.551 [2024-12-06 13:35:01.447434] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:40:08.551 [2024-12-06 13:35:01.447899] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:40:08.551 [2024-12-06 13:35:01.447932] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:40:08.551 [2024-12-06 13:35:01.448166] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
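The bdev_hello_world test that follows the spdk_tgt teardown is fully visible in the NOTICE lines around this point: open bdev nvme0n1, grab an io channel, write "Hello World!", read it back, then stop the app. Rerunning it by hand reduces to a single command, with the paths exactly as used in this log:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1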
00:40:08.551 00:40:08.551 [2024-12-06 13:35:01.448192] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:40:09.928 ************************************ 00:40:09.928 END TEST bdev_hello_world 00:40:09.928 ************************************ 00:40:09.928 00:40:09.928 real 0m2.310s 00:40:09.928 user 0m1.833s 00:40:09.928 sys 0m0.359s 00:40:09.928 13:35:02 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:09.928 13:35:02 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:40:09.928 13:35:02 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:40:09.928 13:35:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:09.928 13:35:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:09.928 13:35:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:09.928 ************************************ 00:40:09.928 START TEST bdev_bounds 00:40:09.928 ************************************ 00:40:09.928 13:35:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:40:09.928 13:35:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74882 00:40:09.928 13:35:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:40:09.928 13:35:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:40:09.928 Process bdevio pid: 74882 00:40:09.928 13:35:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74882' 00:40:09.928 13:35:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74882 00:40:09.928 13:35:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74882 ']' 00:40:09.928 13:35:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:09.928 13:35:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:09.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:09.928 13:35:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:09.928 13:35:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:09.928 13:35:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:40:09.928 [2024-12-06 13:35:02.938487] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:40:09.928 [2024-12-06 13:35:02.939207] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74882 ] 00:40:10.185 [2024-12-06 13:35:03.141356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:10.443 [2024-12-06 13:35:03.298668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:10.443 [2024-12-06 13:35:03.298846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.443 [2024-12-06 13:35:03.298879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:11.035 13:35:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:11.035 13:35:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:40:11.035 13:35:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:40:11.035 I/O targets: 00:40:11.035 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:40:11.035 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:40:11.035 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:40:11.035 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:40:11.035 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:40:11.035 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:40:11.035 00:40:11.035 00:40:11.035 CUnit - A unit testing framework for C - Version 2.1-3 00:40:11.035 http://cunit.sourceforge.net/ 00:40:11.035 00:40:11.035 00:40:11.035 Suite: bdevio tests on: nvme3n1 00:40:11.035 Test: blockdev write read block ...passed 00:40:11.035 Test: blockdev write zeroes read block ...passed 00:40:11.035 Test: blockdev write zeroes read no split ...passed 00:40:11.035 Test: blockdev write zeroes read split ...passed 00:40:11.035 Test: blockdev write zeroes read split partial ...passed 00:40:11.035 Test: blockdev reset ...passed 00:40:11.035 Test: blockdev write read 8 blocks ...passed 00:40:11.035 Test: blockdev write read size > 128k ...passed 00:40:11.035 Test: blockdev write read invalid size ...passed 00:40:11.035 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:11.035 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:11.035 Test: blockdev write read max offset ...passed 00:40:11.035 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:11.036 Test: blockdev writev readv 8 blocks ...passed 00:40:11.036 Test: blockdev writev readv 30 x 1block ...passed 00:40:11.036 Test: blockdev writev readv block ...passed 00:40:11.036 Test: blockdev writev readv size > 128k ...passed 00:40:11.036 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:11.036 Test: blockdev comparev and writev ...passed 00:40:11.036 Test: blockdev nvme passthru rw ...passed 00:40:11.036 Test: blockdev nvme passthru vendor specific ...passed 00:40:11.036 Test: blockdev nvme admin passthru ...passed 00:40:11.036 Test: blockdev copy ...passed 00:40:11.036 Suite: bdevio tests on: nvme2n1 00:40:11.036 Test: blockdev write read block ...passed 00:40:11.036 Test: blockdev write zeroes read block ...passed 00:40:11.350 Test: blockdev write zeroes read no split ...passed 00:40:11.350 Test: blockdev write zeroes read split ...passed 00:40:11.350 Test: blockdev write zeroes read split partial ...passed 00:40:11.350 Test: blockdev reset ...passed 
00:40:11.350 Test: blockdev write read 8 blocks ...passed 00:40:11.350 Test: blockdev write read size > 128k ...passed 00:40:11.350 Test: blockdev write read invalid size ...passed 00:40:11.350 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:11.350 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:11.350 Test: blockdev write read max offset ...passed 00:40:11.350 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:11.350 Test: blockdev writev readv 8 blocks ...passed 00:40:11.350 Test: blockdev writev readv 30 x 1block ...passed 00:40:11.350 Test: blockdev writev readv block ...passed 00:40:11.350 Test: blockdev writev readv size > 128k ...passed 00:40:11.350 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:11.350 Test: blockdev comparev and writev ...passed 00:40:11.350 Test: blockdev nvme passthru rw ...passed 00:40:11.350 Test: blockdev nvme passthru vendor specific ...passed 00:40:11.350 Test: blockdev nvme admin passthru ...passed 00:40:11.350 Test: blockdev copy ...passed 00:40:11.350 Suite: bdevio tests on: nvme1n1 00:40:11.350 Test: blockdev write read block ...passed 00:40:11.350 Test: blockdev write zeroes read block ...passed 00:40:11.350 Test: blockdev write zeroes read no split ...passed 00:40:11.350 Test: blockdev write zeroes read split ...passed 00:40:11.350 Test: blockdev write zeroes read split partial ...passed 00:40:11.350 Test: blockdev reset ...passed 00:40:11.350 Test: blockdev write read 8 blocks ...passed 00:40:11.350 Test: blockdev write read size > 128k ...passed 00:40:11.350 Test: blockdev write read invalid size ...passed 00:40:11.350 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:11.350 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:11.350 Test: blockdev write read max offset ...passed 00:40:11.350 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:11.350 Test: blockdev writev readv 8 blocks ...passed 00:40:11.350 Test: blockdev writev readv 30 x 1block ...passed 00:40:11.350 Test: blockdev writev readv block ...passed 00:40:11.350 Test: blockdev writev readv size > 128k ...passed 00:40:11.350 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:11.350 Test: blockdev comparev and writev ...passed 00:40:11.350 Test: blockdev nvme passthru rw ...passed 00:40:11.350 Test: blockdev nvme passthru vendor specific ...passed 00:40:11.350 Test: blockdev nvme admin passthru ...passed 00:40:11.350 Test: blockdev copy ...passed 00:40:11.350 Suite: bdevio tests on: nvme0n3 00:40:11.350 Test: blockdev write read block ...passed 00:40:11.350 Test: blockdev write zeroes read block ...passed 00:40:11.350 Test: blockdev write zeroes read no split ...passed 00:40:11.350 Test: blockdev write zeroes read split ...passed 00:40:11.350 Test: blockdev write zeroes read split partial ...passed 00:40:11.350 Test: blockdev reset ...passed 00:40:11.350 Test: blockdev write read 8 blocks ...passed 00:40:11.350 Test: blockdev write read size > 128k ...passed 00:40:11.350 Test: blockdev write read invalid size ...passed 00:40:11.350 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:11.350 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:11.611 Test: blockdev write read max offset ...passed 00:40:11.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:11.611 Test: blockdev writev readv 8 blocks 
...passed 00:40:11.611 Test: blockdev writev readv 30 x 1block ...passed 00:40:11.611 Test: blockdev writev readv block ...passed 00:40:11.611 Test: blockdev writev readv size > 128k ...passed 00:40:11.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:11.611 Test: blockdev comparev and writev ...passed 00:40:11.611 Test: blockdev nvme passthru rw ...passed 00:40:11.611 Test: blockdev nvme passthru vendor specific ...passed 00:40:11.611 Test: blockdev nvme admin passthru ...passed 00:40:11.611 Test: blockdev copy ...passed 00:40:11.611 Suite: bdevio tests on: nvme0n2 00:40:11.611 Test: blockdev write read block ...passed 00:40:11.611 Test: blockdev write zeroes read block ...passed 00:40:11.611 Test: blockdev write zeroes read no split ...passed 00:40:11.611 Test: blockdev write zeroes read split ...passed 00:40:11.611 Test: blockdev write zeroes read split partial ...passed 00:40:11.611 Test: blockdev reset ...passed 00:40:11.611 Test: blockdev write read 8 blocks ...passed 00:40:11.611 Test: blockdev write read size > 128k ...passed 00:40:11.611 Test: blockdev write read invalid size ...passed 00:40:11.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:11.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:11.611 Test: blockdev write read max offset ...passed 00:40:11.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:11.611 Test: blockdev writev readv 8 blocks ...passed 00:40:11.611 Test: blockdev writev readv 30 x 1block ...passed 00:40:11.611 Test: blockdev writev readv block ...passed 00:40:11.611 Test: blockdev writev readv size > 128k ...passed 00:40:11.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:11.611 Test: blockdev comparev and writev ...passed 00:40:11.611 Test: blockdev nvme passthru rw ...passed 00:40:11.611 Test: blockdev nvme passthru vendor specific ...passed 00:40:11.611 Test: blockdev nvme admin passthru ...passed 00:40:11.611 Test: blockdev copy ...passed 00:40:11.611 Suite: bdevio tests on: nvme0n1 00:40:11.611 Test: blockdev write read block ...passed 00:40:11.611 Test: blockdev write zeroes read block ...passed 00:40:11.611 Test: blockdev write zeroes read no split ...passed 00:40:11.611 Test: blockdev write zeroes read split ...passed 00:40:11.611 Test: blockdev write zeroes read split partial ...passed 00:40:11.611 Test: blockdev reset ...passed 00:40:11.611 Test: blockdev write read 8 blocks ...passed 00:40:11.611 Test: blockdev write read size > 128k ...passed 00:40:11.611 Test: blockdev write read invalid size ...passed 00:40:11.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:11.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:11.611 Test: blockdev write read max offset ...passed 00:40:11.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:11.611 Test: blockdev writev readv 8 blocks ...passed 00:40:11.611 Test: blockdev writev readv 30 x 1block ...passed 00:40:11.611 Test: blockdev writev readv block ...passed 00:40:11.611 Test: blockdev writev readv size > 128k ...passed 00:40:11.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:11.611 Test: blockdev comparev and writev ...passed 00:40:11.611 Test: blockdev nvme passthru rw ...passed 00:40:11.611 Test: blockdev nvme passthru vendor specific ...passed 00:40:11.611 Test: blockdev nvme admin passthru ...passed 00:40:11.611 Test: blockdev copy ...passed 
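Each of the six xnvme bdevs gets the same 23-case bdevio suite, which is where the 138 tests in the summary below come from (6 suites x 23 cases). Per the trace before the suites, the run is two cooperating processes; roughly, and with paths from this log (backgrounding shown here is a sketch, the real bdev_bounds helper manages the process itself):

    # bdevio loads bdev.json, then waits (-w) for an RPC telling it to start
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    # tests.py triggers every registered suite over the RPC socket
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests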
00:40:11.611 00:40:11.611 Run Summary: Type Total Ran Passed Failed Inactive 00:40:11.611 suites 6 6 n/a 0 0 00:40:11.611 tests 138 138 138 0 0 00:40:11.611 asserts 780 780 780 0 n/a 00:40:11.611 00:40:11.611 Elapsed time = 1.781 seconds 00:40:11.611 0 00:40:11.611 13:35:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74882 00:40:11.611 13:35:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74882 ']' 00:40:11.611 13:35:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74882 00:40:11.611 13:35:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:40:11.611 13:35:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:11.611 13:35:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74882 00:40:11.611 13:35:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:11.611 13:35:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:11.611 13:35:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74882' 00:40:11.611 killing process with pid 74882 00:40:11.611 13:35:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74882 00:40:11.611 13:35:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74882 00:40:13.007 13:35:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:40:13.007 00:40:13.007 real 0m3.202s 00:40:13.007 user 0m7.871s 00:40:13.007 sys 0m0.585s 00:40:13.007 13:35:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:13.007 13:35:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:40:13.007 ************************************ 00:40:13.007 END TEST bdev_bounds 00:40:13.007 ************************************ 00:40:13.007 13:35:06 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:40:13.007 13:35:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:40:13.007 13:35:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:13.007 13:35:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:13.007 ************************************ 00:40:13.007 START TEST bdev_nbd 00:40:13.007 ************************************ 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:40:13.007 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:40:13.008 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:40:13.008 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74948 00:40:13.008 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:40:13.008 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:40:13.008 13:35:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74948 /var/tmp/spdk-nbd.sock 00:40:13.008 13:35:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74948 ']' 00:40:13.008 13:35:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:40:13.008 13:35:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:13.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:40:13.008 13:35:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:40:13.008 13:35:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:13.008 13:35:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:40:13.267 [2024-12-06 13:35:06.217071] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:40:13.267 [2024-12-06 13:35:06.217264] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:13.526 [2024-12-06 13:35:06.417477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.526 [2024-12-06 13:35:06.569552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:14.093 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:40:14.094 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:14.353 
1+0 records in 00:40:14.353 1+0 records out 00:40:14.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523266 s, 7.8 MB/s 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:14.353 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:14.612 1+0 records in 00:40:14.612 1+0 records out 00:40:14.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517925 s, 7.9 MB/s 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:40:14.612 13:35:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:40:15.180 13:35:08 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:15.180 1+0 records in 00:40:15.180 1+0 records out 00:40:15.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598124 s, 6.8 MB/s 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:40:15.180 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:15.439 1+0 records in 00:40:15.439 1+0 records out 00:40:15.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637562 s, 6.4 MB/s 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:40:15.439 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:15.698 1+0 records in 00:40:15.698 1+0 records out 00:40:15.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683361 s, 6.0 MB/s 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:40:15.698 13:35:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:40:15.957 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:40:15.957 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:40:15.957 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:40:15.957 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:40:15.957 13:35:09 
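
The entries above all replay one helper, waitfornbd: poll /proc/partitions until the kernel lists the device, then prove it actually serves I/O by reading a single 4 KiB block through O_DIRECT and checking that the copy is non-empty. A standalone sketch of that helper, reconstructed from the traced autotest_common.sh lines (the 20-iteration bound and the dd/stat/rm sequence are verbatim from the trace; the temp path is shortened here, and the polling sleep is an assumption, since xtrace does not show it):

    waitfornbd() {
        local nbd_name=$1
        local i size
        local tmp=/tmp/nbdtest    # the trace uses test/bdev/nbdtest inside the repo

        # Phase 1: wait until the kernel registers the device at all.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1             # assumed interval; not visible in xtrace
        done

        # Phase 2: read one 4 KiB block with the page cache bypassed and
        # require a non-empty result before declaring the device usable.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
        done
        return 1
    }
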
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:15.957 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:15.957 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:15.958 1+0 records in 00:40:15.958 1+0 records out 00:40:15.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0009316 s, 4.4 MB/s 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:40:15.958 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:16.526 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:40:16.526 { 00:40:16.526 "nbd_device": "/dev/nbd0", 00:40:16.526 "bdev_name": "nvme0n1" 00:40:16.526 }, 00:40:16.526 { 00:40:16.526 "nbd_device": "/dev/nbd1", 00:40:16.526 "bdev_name": "nvme0n2" 00:40:16.526 }, 00:40:16.526 { 00:40:16.526 "nbd_device": "/dev/nbd2", 00:40:16.526 "bdev_name": "nvme0n3" 00:40:16.526 }, 00:40:16.526 { 00:40:16.526 "nbd_device": "/dev/nbd3", 00:40:16.526 "bdev_name": "nvme1n1" 00:40:16.526 }, 00:40:16.526 { 00:40:16.526 "nbd_device": "/dev/nbd4", 00:40:16.526 "bdev_name": "nvme2n1" 00:40:16.526 }, 00:40:16.526 { 00:40:16.526 "nbd_device": "/dev/nbd5", 00:40:16.526 "bdev_name": "nvme3n1" 00:40:16.526 } 00:40:16.526 ]' 00:40:16.526 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:40:16.526 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:40:16.526 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:40:16.526 { 00:40:16.526 "nbd_device": "/dev/nbd0", 00:40:16.526 "bdev_name": "nvme0n1" 00:40:16.526 }, 00:40:16.526 { 00:40:16.526 "nbd_device": "/dev/nbd1", 00:40:16.526 "bdev_name": "nvme0n2" 00:40:16.526 }, 00:40:16.526 { 00:40:16.526 "nbd_device": "/dev/nbd2", 00:40:16.526 "bdev_name": "nvme0n3" 00:40:16.526 }, 00:40:16.526 { 00:40:16.526 "nbd_device": "/dev/nbd3", 00:40:16.526 "bdev_name": "nvme1n1" 00:40:16.526 }, 00:40:16.526 { 00:40:16.526 "nbd_device": "/dev/nbd4", 00:40:16.526 "bdev_name": "nvme2n1" 00:40:16.526 }, 00:40:16.526 { 00:40:16.526 "nbd_device": 
"/dev/nbd5", 00:40:16.526 "bdev_name": "nvme3n1" 00:40:16.526 } 00:40:16.526 ]' 00:40:16.526 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:40:16.526 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:16.526 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:40:16.526 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:16.526 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:16.526 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:16.526 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:16.526 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:16.784 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:16.784 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:16.784 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:16.784 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:16.784 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:16.784 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:16.784 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:16.784 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:16.784 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:40:17.042 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:17.042 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:17.042 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:17.042 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:17.042 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:17.042 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:17.042 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:17.042 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:17.042 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:17.042 13:35:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:40:17.301 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:40:17.301 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:40:17.301 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:40:17.301 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:17.301 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:17.301 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:40:17.301 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:17.301 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:17.301 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:17.301 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:40:17.560 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:40:17.560 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:40:17.560 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:40:17.560 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:17.560 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:17.560 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:40:17.560 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:17.560 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:17.560 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:17.560 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:40:17.819 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:40:17.819 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:40:17.819 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:40:17.819 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:17.819 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:17.819 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:40:17.819 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:17.819 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:17.819 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:17.819 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:40:18.077 13:35:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:40:18.077 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:40:18.077 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:40:18.077 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:18.077 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:18.077 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:40:18.077 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:18.077 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:18.077 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:18.077 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:18.077 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:40:18.336 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:40:18.638 /dev/nbd0 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:18.638 1+0 records in 00:40:18.638 1+0 records out 00:40:18.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575379 s, 7.1 MB/s 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:40:18.638 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:40:18.897 /dev/nbd1 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:18.897 1+0 records in 00:40:18.897 1+0 records out 00:40:18.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397074 s, 10.3 MB/s 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:18.897 13:35:11 
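
At this point the suite has switched modes: each nbd_start_disk call now pins its bdev to an explicit kernel device (nvme0n1 to /dev/nbd0, nvme0n2 to /dev/nbd1, and so on up to /dev/nbd13) instead of accepting whatever the target allocates. The RPC surface in play, condensed into a sketch with the socket and script paths copied verbatim from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # With one argument the target picks a free /dev/nbdN and prints it;
    # with a second argument that exact device path is used instead.
    $rpc -s $sock nbd_start_disk nvme0n1 /dev/nbd0
    $rpc -s $sock nbd_start_disk nvme0n2 /dev/nbd1
    $rpc -s $sock nbd_start_disk nvme0n3 /dev/nbd10

    # Active mappings come back as a JSON array of
    # {"nbd_device": ..., "bdev_name": ...} objects.
    $rpc -s $sock nbd_get_disks
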
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:40:18.897 13:35:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:40:19.155 /dev/nbd10 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:19.155 1+0 records in 00:40:19.155 1+0 records out 00:40:19.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557175 s, 7.4 MB/s 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:19.155 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:19.156 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:19.156 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:19.156 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:40:19.156 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:40:19.414 /dev/nbd11 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:19.414 13:35:12 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:19.414 1+0 records in 00:40:19.414 1+0 records out 00:40:19.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642195 s, 6.4 MB/s 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:40:19.414 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:40:19.673 /dev/nbd12 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:19.673 1+0 records in 00:40:19.673 1+0 records out 00:40:19.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710522 s, 5.8 MB/s 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:40:19.673 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:40:19.932 /dev/nbd13 00:40:19.932 13:35:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:40:19.932 13:35:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:40:19.932 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:40:19.932 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:40:19.932 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:19.932 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:19.932 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:40:19.932 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:40:19.932 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:19.932 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:19.932 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:20.872 1+0 records in 00:40:20.872 1+0 records out 00:40:20.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.891191 s, 4.6 kB/s 00:40:20.872 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:20.872 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:40:20.872 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:20.872 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:20.872 13:35:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:40:20.872 13:35:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:20.872 13:35:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:40:20.872 13:35:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:20.872 13:35:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:20.872 13:35:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:21.130 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:40:21.130 { 00:40:21.130 "nbd_device": "/dev/nbd0", 00:40:21.130 "bdev_name": "nvme0n1" 00:40:21.130 }, 00:40:21.130 { 00:40:21.130 "nbd_device": "/dev/nbd1", 00:40:21.130 "bdev_name": "nvme0n2" 00:40:21.130 }, 00:40:21.130 { 00:40:21.130 "nbd_device": "/dev/nbd10", 00:40:21.130 "bdev_name": "nvme0n3" 00:40:21.130 }, 00:40:21.130 { 00:40:21.130 "nbd_device": "/dev/nbd11", 00:40:21.130 "bdev_name": "nvme1n1" 00:40:21.130 }, 00:40:21.130 { 00:40:21.130 "nbd_device": "/dev/nbd12", 00:40:21.130 "bdev_name": "nvme2n1" 00:40:21.130 }, 00:40:21.130 { 00:40:21.130 "nbd_device": "/dev/nbd13", 00:40:21.130 "bdev_name": "nvme3n1" 00:40:21.130 } 00:40:21.130 ]' 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd 
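
The sequence that follows is the count check: the nbd_get_disks output is piped through jq to extract the device paths, and grep -c tallies the lines. Because grep -c exits non-zero when nothing matches, the helper guards the pipeline, which is why a bare true shows up in the trace whenever the list is empty. A sketch of the pattern:

    # Count mapped NBD devices; '|| true' keeps 'set -e' scripts alive
    # when the list is empty (grep -c prints 0 but exits 1).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nbd_disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    [ "$count" -eq 6 ]    # six bdevs are expected to be mapped here
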
-- bdev/nbd_common.sh@64 -- # echo '[ 00:40:21.389 { 00:40:21.389 "nbd_device": "/dev/nbd0", 00:40:21.389 "bdev_name": "nvme0n1" 00:40:21.389 }, 00:40:21.389 { 00:40:21.389 "nbd_device": "/dev/nbd1", 00:40:21.389 "bdev_name": "nvme0n2" 00:40:21.389 }, 00:40:21.389 { 00:40:21.389 "nbd_device": "/dev/nbd10", 00:40:21.389 "bdev_name": "nvme0n3" 00:40:21.389 }, 00:40:21.389 { 00:40:21.389 "nbd_device": "/dev/nbd11", 00:40:21.389 "bdev_name": "nvme1n1" 00:40:21.389 }, 00:40:21.389 { 00:40:21.389 "nbd_device": "/dev/nbd12", 00:40:21.389 "bdev_name": "nvme2n1" 00:40:21.389 }, 00:40:21.389 { 00:40:21.389 "nbd_device": "/dev/nbd13", 00:40:21.389 "bdev_name": "nvme3n1" 00:40:21.389 } 00:40:21.389 ]' 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:40:21.389 /dev/nbd1 00:40:21.389 /dev/nbd10 00:40:21.389 /dev/nbd11 00:40:21.389 /dev/nbd12 00:40:21.389 /dev/nbd13' 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:40:21.389 /dev/nbd1 00:40:21.389 /dev/nbd10 00:40:21.389 /dev/nbd11 00:40:21.389 /dev/nbd12 00:40:21.389 /dev/nbd13' 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:40:21.389 256+0 records in 00:40:21.389 256+0 records out 00:40:21.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111694 s, 93.9 MB/s 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:40:21.389 256+0 records in 00:40:21.389 256+0 records out 00:40:21.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119495 s, 8.8 MB/s 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:21.389 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:40:21.648 256+0 records in 00:40:21.648 256+0 records out 00:40:21.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128585 s, 
8.2 MB/s 00:40:21.648 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:21.648 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:40:21.648 256+0 records in 00:40:21.648 256+0 records out 00:40:21.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140542 s, 7.5 MB/s 00:40:21.648 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:21.648 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:40:21.906 256+0 records in 00:40:21.906 256+0 records out 00:40:21.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131643 s, 8.0 MB/s 00:40:21.906 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:21.906 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:40:21.906 256+0 records in 00:40:21.906 256+0 records out 00:40:21.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123422 s, 8.5 MB/s 00:40:21.906 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:21.906 13:35:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:40:22.165 256+0 records in 00:40:22.165 256+0 records out 00:40:22.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142398 s, 7.4 MB/s 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:40:22.165 
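
The dd and cmp entries around this point are the data-path test proper: a single 1 MiB buffer of random data (256 writes of 4 KiB) goes out through every NBD device with the page cache bypassed, and is then compared back byte-for-byte against the source file. Condensed into a sketch (device list and transfer sizes from the trace; the temp path is shortened):

    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp=/tmp/nbdrandtest

    dd if=/dev/urandom of="$tmp" bs=4096 count=256

    for dev in "${nbd_list[@]}"; do
        # oflag=direct forces the bytes across the NBD socket into the
        # SPDK bdev instead of parking them in the page cache.
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"    # -b prints differing bytes, if any
    done

    rm "$tmp"
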
13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:22.165 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:22.423 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:22.423 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:22.423 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:22.423 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:22.423 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:22.423 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:22.423 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:22.423 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:22.423 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:22.423 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:40:22.680 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:22.680 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:22.680 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:22.680 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:22.680 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:22.680 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:22.680 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:22.680 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:22.680 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:22.680 13:35:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:23.245 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:40:23.503 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:40:23.503 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:40:23.503 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:40:23.503 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:23.503 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:23.503 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:40:23.503 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:23.503 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:23.503 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:23.503 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:40:23.760 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:40:23.760 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:40:23.760 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:40:23.760 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:23.760 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:23.760 
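
Teardown mirrors setup: after each nbd_stop_disk RPC the suite polls /proc/partitions again, this time waiting for the name to vanish. A sketch of waitfornbd_exit as it can be read from the trace; the loop shape is verbatim, the negated grep sense is inferred from the loop breaking once the device is gone, and the sleep is assumed:

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            # Done as soon as the kernel no longer lists the device.
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1    # assumed; not visible in xtrace
        done
        return 0
    }
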
13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:40:23.760 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:23.760 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:23.760 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:23.760 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:23.760 13:35:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:40:24.327 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:40:24.585 malloc_lvol_verify 00:40:24.585 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:40:24.844 b98faff5-e4cc-4bb8-9631-a19fc2cc4d9f 00:40:24.844 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:40:25.102 6ed0cb2b-5fd7-45c0-b1ec-446e7cacaa45 00:40:25.102 13:35:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:40:25.102 /dev/nbd0 00:40:25.362 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:40:25.362 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:40:25.362 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:40:25.362 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:40:25.362 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:40:25.362 mke2fs 1.47.0 (5-Feb-2023) 00:40:25.362 Discarding device blocks: 0/4096 
done 00:40:25.362 Creating filesystem with 4096 1k blocks and 1024 inodes 00:40:25.362 00:40:25.362 Allocating group tables: 0/1 done 00:40:25.362 Writing inode tables: 0/1 done 00:40:25.362 Creating journal (1024 blocks): done 00:40:25.362 Writing superblocks and filesystem accounting information: 0/1 done 00:40:25.362 00:40:25.362 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:25.362 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:25.362 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:25.362 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:25.362 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:25.362 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:25.362 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74948 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74948 ']' 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74948 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74948 00:40:25.621 killing process with pid 74948 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74948' 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74948 00:40:25.621 13:35:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74948 00:40:27.023 13:35:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:40:27.023 00:40:27.023 real 0m13.876s 00:40:27.023 user 0m17.523s 00:40:27.023 sys 0m5.890s 00:40:27.023 13:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.023 13:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:40:27.023 ************************************ 00:40:27.023 END TEST bdev_nbd 00:40:27.023 ************************************ 
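
The last scenario before the END TEST banner above is nbd_with_lvol_verify: a 16 MiB malloc bdev becomes an lvolstore, a 4 MiB logical volume is carved out of it and exported as /dev/nbd0, and mkfs.ext4 completing on that device is the pass criterion. The whole flow, condensed into a sketch (sizes, names, and RPC verbs all taken from the trace):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the store UUID
    rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume in 'lvs'
    rpc nbd_start_disk lvs/lvol /dev/nbd0

    # The capacity must be visible in sysfs before mkfs can be trusted.
    [[ -e /sys/block/nbd0/size ]] && (( $(cat /sys/block/nbd0/size) != 0 ))

    mkfs.ext4 /dev/nbd0
    rpc nbd_stop_disk /dev/nbd0
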
00:40:27.023 13:35:19 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:40:27.023 13:35:19 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:40:27.023 13:35:19 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:40:27.023 13:35:19 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:40:27.023 13:35:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:27.023 13:35:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:27.023 13:35:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:27.023 ************************************ 00:40:27.023 START TEST bdev_fio 00:40:27.023 ************************************ 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:40:27.023 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:40:27.023 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:40:27.024 ************************************ 00:40:27.024 START TEST bdev_fio_rw_verify 00:40:27.024 ************************************ 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:40:27.024 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:27.282 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:27.282 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:27.282 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:40:27.282 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:40:27.282 13:35:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:40:27.540 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:27.540 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:27.540 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:27.540 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:27.540 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:27.540 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:27.540 fio-3.35 00:40:27.540 Starting 6 threads 00:40:39.734 00:40:39.734 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75392: Fri Dec 6 13:35:31 2024 00:40:39.734 read: IOPS=29.8k, BW=117MiB/s (122MB/s)(1165MiB/10001msec) 00:40:39.734 slat (usec): min=2, max=960, avg= 6.95, stdev= 5.14 00:40:39.734 clat (usec): min=97, max=4664, avg=624.58, stdev=236.28 00:40:39.734 lat (usec): min=106, max=4686, avg=631.53, stdev=237.06 
00:40:39.734 clat percentiles (usec): 00:40:39.734 | 50.000th=[ 635], 99.000th=[ 1205], 99.900th=[ 2024], 99.990th=[ 3720], 00:40:39.734 | 99.999th=[ 4621] 00:40:39.734 write: IOPS=30.2k, BW=118MiB/s (124MB/s)(1178MiB/10001msec); 0 zone resets 00:40:39.734 slat (usec): min=8, max=1544, avg=25.75, stdev=28.29 00:40:39.734 clat (usec): min=90, max=5925, avg=710.80, stdev=255.58 00:40:39.734 lat (usec): min=110, max=5948, avg=736.54, stdev=258.63 00:40:39.734 clat percentiles (usec): 00:40:39.734 | 50.000th=[ 709], 99.000th=[ 1418], 99.900th=[ 2442], 99.990th=[ 4293], 00:40:39.734 | 99.999th=[ 5866] 00:40:39.734 bw ( KiB/s): min=98616, max=148552, per=99.64%, avg=120167.58, stdev=2716.01, samples=114 00:40:39.734 iops : min=24654, max=37138, avg=30041.68, stdev=678.99, samples=114 00:40:39.734 lat (usec) : 100=0.01%, 250=3.26%, 500=21.70%, 750=38.95%, 1000=29.51% 00:40:39.734 lat (msec) : 2=6.42%, 4=0.15%, 10=0.01% 00:40:39.734 cpu : usr=59.13%, sys=26.66%, ctx=7414, majf=0, minf=25263 00:40:39.734 IO depths : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:39.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:39.735 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:39.735 issued rwts: total=298355,301545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:39.735 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:39.735 00:40:39.735 Run status group 0 (all jobs): 00:40:39.735 READ: bw=117MiB/s (122MB/s), 117MiB/s-117MiB/s (122MB/s-122MB/s), io=1165MiB (1222MB), run=10001-10001msec 00:40:39.735 WRITE: bw=118MiB/s (124MB/s), 118MiB/s-118MiB/s (124MB/s-124MB/s), io=1178MiB (1235MB), run=10001-10001msec 00:40:39.993 ----------------------------------------------------- 00:40:39.993 Suppressions used: 00:40:39.993 count bytes template 00:40:39.993 6 48 /usr/src/fio/parse.c 00:40:39.993 2992 287232 /usr/src/fio/iolog.c 00:40:39.993 1 8 libtcmalloc_minimal.so 00:40:39.993 1 904 libcrypto.so 00:40:39.993 ----------------------------------------------------- 00:40:39.993 00:40:39.993 00:40:39.993 real 0m12.961s 00:40:39.993 user 0m37.753s 00:40:39.993 sys 0m16.578s 00:40:39.993 13:35:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:39.993 13:35:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:40:39.993 ************************************ 00:40:39.993 END TEST bdev_fio_rw_verify 00:40:39.993 ************************************ 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "7168a055-0cf5-4284-9f38-8a133b5104f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7168a055-0cf5-4284-9f38-8a133b5104f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "6fe39b50-0eb7-437d-b574-d10d85478e29"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6fe39b50-0eb7-437d-b574-d10d85478e29",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "859a0b7f-c9e9-48cb-8d80-5bd3f64a1602"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "859a0b7f-c9e9-48cb-8d80-5bd3f64a1602",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "adeefa6e-d0f6-4d55-8bde-d323f4e1e18d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "adeefa6e-d0f6-4d55-8bde-d323f4e1e18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "5622778c-e430-42eb-b3fd-ccbc22be5d5c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5622778c-e430-42eb-b3fd-ccbc22be5d5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "004fba90-28b9-43d7-96af-5d99db82a39c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "004fba90-28b9-43d7-96af-5d99db82a39c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:40:40.253 /home/vagrant/spdk_repo/spdk 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:40:40.253 00:40:40.253 real 0m13.162s 00:40:40.253 user 
0m37.857s 00:40:40.253 sys 0m16.682s 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:40.253 13:35:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:40:40.253 ************************************ 00:40:40.253 END TEST bdev_fio 00:40:40.253 ************************************ 00:40:40.253 13:35:33 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:40.253 13:35:33 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:40.253 13:35:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:40:40.253 13:35:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:40.253 13:35:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:40.253 ************************************ 00:40:40.253 START TEST bdev_verify 00:40:40.253 ************************************ 00:40:40.253 13:35:33 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:40.253 [2024-12-06 13:35:33.330619] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:40:40.253 [2024-12-06 13:35:33.331541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75562 ] 00:40:40.517 [2024-12-06 13:35:33.523918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:40.783 [2024-12-06 13:35:33.729010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.783 [2024-12-06 13:35:33.729041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:41.350 Running I/O for 5 seconds... 
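A minimal sketch of how this bdevperf verify pass can be re-run by hand outside the test harness, assuming the repository paths shown in this log; the flags are copied verbatim from the harness invocation above (see bdevperf --help for their semantics):

    # Re-drive the same 4 KiB read-back verification against the generated config.
    # bdev.json is the xnvme bdev configuration the harness dumped earlier in this run.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3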
00:40:43.658 22304.00 IOPS, 87.12 MiB/s [2024-12-06T13:35:37.694Z] 22033.50 IOPS, 86.07 MiB/s [2024-12-06T13:35:39.068Z] 22016.00 IOPS, 86.00 MiB/s [2024-12-06T13:35:39.635Z] 21784.75 IOPS, 85.10 MiB/s
00:40:46.535 Latency(us)
00:40:46.535 [2024-12-06T13:35:39.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:46.535 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:40:46.535 Verification LBA range: start 0x0 length 0x80000
00:40:46.535 nvme0n1 : 5.04 1600.53 6.25 0.00 0.00 79837.78 17850.76 72901.00
00:40:46.535 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:40:46.535 Verification LBA range: start 0x80000 length 0x80000
00:40:46.535 nvme0n1 : 5.04 1626.02 6.35 0.00 0.00 78586.04 7801.90 90377.26
00:40:46.535 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:40:46.535 Verification LBA range: start 0x0 length 0x80000
00:40:46.535 nvme0n2 : 5.06 1618.03 6.32 0.00 0.00 78849.39 4181.82 79891.50
00:40:46.535 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:40:46.535 Verification LBA range: start 0x80000 length 0x80000
00:40:46.535 nvme0n2 : 5.06 1619.19 6.32 0.00 0.00 78814.83 8675.72 81389.47
00:40:46.535 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:40:46.535 Verification LBA range: start 0x0 length 0x80000
00:40:46.535 nvme0n3 : 5.04 1599.18 6.25 0.00 0.00 79665.61 16103.13 70404.39
00:40:46.535 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:40:46.535 Verification LBA range: start 0x80000 length 0x80000
00:40:46.535 nvme0n3 : 5.06 1618.69 6.32 0.00 0.00 78718.74 12545.46 72901.00
00:40:46.535 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:40:46.535 Verification LBA range: start 0x0 length 0x20000
00:40:46.535 nvme1n1 : 5.05 1596.66 6.24 0.00 0.00 79677.05 14480.34 81389.47
00:40:46.535 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:40:46.535 Verification LBA range: start 0x20000 length 0x20000
00:40:46.535 nvme1n1 : 5.06 1618.07 6.32 0.00 0.00 78630.71 17351.44 71403.03
00:40:46.535 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:40:46.535 Verification LBA range: start 0x0 length 0xa0000
00:40:46.535 nvme2n1 : 5.07 1565.29 6.11 0.00 0.00 81151.87 11047.50 89877.94
00:40:46.535 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:40:46.535 Verification LBA range: start 0xa0000 length 0xa0000
00:40:46.535 nvme2n1 : 5.04 1446.41 5.65 0.00 0.00 87828.04 10610.59 121834.54
00:40:46.535 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:40:46.535 Verification LBA range: start 0x0 length 0xbd0bd
00:40:46.535 nvme3n1 : 5.07 2510.41 9.81 0.00 0.00 50386.03 5242.88 64911.85
00:40:46.535 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:40:46.535 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:40:46.535 nvme3n1 : 5.07 2607.15 10.18 0.00 0.00 48616.92 3791.73 74898.29
00:40:46.535 [2024-12-06T13:35:39.635Z] ===================================================================================================================
00:40:46.535 [2024-12-06T13:35:39.635Z] Total : 21025.61 82.13 0.00 0.00 72619.81 3791.73 121834.54
00:40:47.906
00:40:47.906 real 0m7.569s
00:40:47.906 user 0m11.937s
00:40:47.906 sys 0m1.881s
00:40:47.906 13:35:40 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:40:47.906 13:35:40 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:40:47.906 ************************************
00:40:47.906 END TEST bdev_verify
00:40:47.906 ************************************
00:40:47.906 13:35:40 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:40:47.906 13:35:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:40:47.906 13:35:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:40:47.906 13:35:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:40:47.906 ************************************
00:40:47.906 START TEST bdev_verify_big_io
00:40:47.906 ************************************
00:40:47.906 13:35:40 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:40:47.907 [2024-12-06 13:35:40.967755] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization...
00:40:47.907 [2024-12-06 13:35:40.967925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75668 ]
00:40:48.165 [2024-12-06 13:35:41.149032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:40:48.423 [2024-12-06 13:35:41.308063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:40:48.423 [2024-12-06 13:35:41.308096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:40:48.986 Running I/O for 5 seconds...
00:40:54.804 2261.00 IOPS, 141.31 MiB/s [2024-12-06T13:35:47.904Z] 3270.50 IOPS, 204.41 MiB/s
00:40:54.804 Latency(us)
00:40:54.804 [2024-12-06T13:35:47.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:54.804 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:40:54.804 Verification LBA range: start 0x0 length 0x8000
00:40:54.804 nvme0n1 : 5.72 145.57 9.10 0.00 0.00 839338.07 18225.25 910763.15
00:40:54.804 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:40:54.804 Verification LBA range: start 0x8000 length 0x8000
00:40:54.804 nvme0n1 : 5.76 109.75 6.86 0.00 0.00 1098986.97 77894.22 2077179.12
00:40:54.804 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:40:54.804 Verification LBA range: start 0x0 length 0x8000
00:40:54.804 nvme0n2 : 5.73 128.49 8.03 0.00 0.00 935347.97 29584.82 1549895.19
00:40:54.804 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:40:54.804 Verification LBA range: start 0x8000 length 0x8000
00:40:54.804 nvme0n2 : 5.76 133.31 8.33 0.00 0.00 876483.94 16976.94 822882.50
00:40:54.804 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:40:54.804 Verification LBA range: start 0x0 length 0x8000
00:40:54.804 nvme0n3 : 5.73 164.71 10.29 0.00 0.00 729255.15 10985.08 1254296.62
00:40:54.804 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:40:54.804 Verification LBA range: start 0x8000 length 0x8000
00:40:54.804 nvme0n3 : 5.78 156.48 9.78 0.00 0.00 766353.07 73899.64 1278264.08
00:40:54.804 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:40:54.804 Verification LBA range: start 0x0 length 0x2000
00:40:54.804 nvme1n1 : 5.74 131.04 8.19 0.00 0.00 889648.05 17725.93 1989298.47
00:40:54.804 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:40:54.804 Verification LBA range: start 0x2000 length 0x2000
00:40:54.804 nvme1n1 : 5.77 155.40 9.71 0.00 0.00 758621.83 48434.22 890790.28
00:40:54.804 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:40:54.804 Verification LBA range: start 0x0 length 0xa000
00:40:54.804 nvme2n1 : 5.73 145.07 9.07 0.00 0.00 785059.61 12795.12 1390112.18
00:40:54.804 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:40:54.804 Verification LBA range: start 0xa000 length 0xa000
00:40:54.804 nvme2n1 : 5.79 138.26 8.64 0.00 0.00 834977.09 6553.60 1845493.76
00:40:54.804 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:40:54.804 Verification LBA range: start 0x0 length 0xbd0b
00:40:54.804 nvme3n1 : 5.74 147.21 9.20 0.00 0.00 755066.39 11858.90 814893.35
00:40:54.804 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:40:54.804 Verification LBA range: start 0xbd0b length 0xbd0b
00:40:54.804 nvme3n1 : 5.77 166.24 10.39 0.00 0.00 679297.80 16352.79 830871.65
00:40:54.804 [2024-12-06T13:35:47.904Z] ===================================================================================================================
00:40:54.804 [2024-12-06T13:35:47.904Z] Total : 1721.55 107.60 0.00 0.00 817664.24 6553.60 2077179.12
00:40:56.707
00:40:56.707 real 0m8.560s
00:40:56.707 user 0m15.396s
00:40:56.707 sys 0m0.696s
00:40:56.707 13:35:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:40:56.707 13:35:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
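As a sanity check, the MiB/s column in these bdevperf tables is simply IOPS times the I/O size, so the logged figures can be cross-checked with shell arithmetic (numbers taken from the results above):

    # 4 KiB verify run:   21784.75 IOPS * 4096 B  -> ~85.10 MiB/s, as logged
    echo '21784.75 * 4096 / 1048576' | bc -l
    # 64 KiB big-I/O run:  3270.50 IOPS * 65536 B -> ~204.41 MiB/s, as logged
    echo '3270.50 * 65536 / 1048576' | bc -l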
00:40:56.707 ************************************ 00:40:56.707 END TEST bdev_verify_big_io 00:40:56.707 ************************************ 00:40:56.707 13:35:49 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:56.707 13:35:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:40:56.707 13:35:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:56.707 13:35:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:56.707 ************************************ 00:40:56.707 START TEST bdev_write_zeroes 00:40:56.707 ************************************ 00:40:56.707 13:35:49 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:56.707 [2024-12-06 13:35:49.623777] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:40:56.707 [2024-12-06 13:35:49.623925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75782 ] 00:40:56.707 [2024-12-06 13:35:49.801115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:56.965 [2024-12-06 13:35:49.950039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:57.530 Running I/O for 1 seconds... 00:40:58.468 74463.00 IOPS, 290.87 MiB/s 00:40:58.468 Latency(us) 00:40:58.468 [2024-12-06T13:35:51.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:58.468 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:58.468 nvme0n1 : 1.01 11735.84 45.84 0.00 0.00 10896.31 6116.69 21346.01 00:40:58.468 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:58.468 nvme0n2 : 1.02 11724.83 45.80 0.00 0.00 10899.56 6459.98 20971.52 00:40:58.468 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:58.468 nvme0n3 : 1.02 11712.37 45.75 0.00 0.00 10902.56 6366.35 20597.03 00:40:58.468 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:58.468 nvme1n1 : 1.02 11701.77 45.71 0.00 0.00 10904.89 6303.94 20222.54 00:40:58.468 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:58.468 nvme2n1 : 1.02 11691.00 45.67 0.00 0.00 10907.58 6397.56 20721.86 00:40:58.468 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:58.468 nvme3n1 : 1.02 15477.08 60.46 0.00 0.00 8230.98 2402.99 22219.82 00:40:58.468 [2024-12-06T13:35:51.568Z] =================================================================================================================== 00:40:58.468 [2024-12-06T13:35:51.568Z] Total : 74042.90 289.23 0.00 0.00 10340.75 2402.99 22219.82 00:40:59.880 00:40:59.880 real 0m3.343s 00:40:59.880 user 0m2.376s 00:40:59.880 sys 0m0.786s 00:40:59.880 13:35:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:59.880 13:35:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:40:59.880 ************************************ 00:40:59.880 END TEST 
bdev_write_zeroes 00:40:59.880 ************************************ 00:40:59.880 13:35:52 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:59.880 13:35:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:40:59.880 13:35:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:59.880 13:35:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:40:59.880 ************************************ 00:40:59.880 START TEST bdev_json_nonenclosed 00:40:59.880 ************************************ 00:40:59.880 13:35:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:00.138 [2024-12-06 13:35:53.032296] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:41:00.138 [2024-12-06 13:35:53.032512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75839 ] 00:41:00.138 [2024-12-06 13:35:53.231323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:00.397 [2024-12-06 13:35:53.380758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:00.397 [2024-12-06 13:35:53.380900] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:41:00.397 [2024-12-06 13:35:53.380926] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:41:00.397 [2024-12-06 13:35:53.380940] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:00.655 ************************************ 00:41:00.655 END TEST bdev_json_nonenclosed 00:41:00.655 00:41:00.655 real 0m0.777s 00:41:00.655 user 0m0.467s 00:41:00.655 sys 0m0.204s 00:41:00.655 13:35:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:00.655 13:35:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:41:00.655 ************************************ 00:41:00.655 13:35:53 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:00.655 13:35:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:41:00.655 13:35:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:00.655 13:35:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:00.655 ************************************ 00:41:00.655 START TEST bdev_json_nonarray 00:41:00.655 ************************************ 00:41:00.655 13:35:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:00.913 [2024-12-06 13:35:53.865975] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
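The bdev_json_nonenclosed test above and the bdev_json_nonarray test below are negative tests: each feeds bdevperf a deliberately malformed --json config and passes only if the app shuts down cleanly via spdk_app_stop with a non-zero code instead of crashing. A sketch of configs with the two offending shapes, inferred from the logged error messages; these are illustrative stand-ins, not the contents of the real fixtures under test/bdev/:

    # Root value is an array, not an object -> "not enclosed in {}"
    cat > /tmp/nonenclosed.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF
    # "subsystems" is an object, not an array -> "'subsystems' should be an array"
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF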
00:41:00.913 [2024-12-06 13:35:53.866181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75870 ] 00:41:01.171 [2024-12-06 13:35:54.058146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:01.171 [2024-12-06 13:35:54.208121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:01.171 [2024-12-06 13:35:54.208248] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:41:01.171 [2024-12-06 13:35:54.208276] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:41:01.171 [2024-12-06 13:35:54.208291] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:01.429 00:41:01.429 real 0m0.765s 00:41:01.429 user 0m0.470s 00:41:01.429 sys 0m0.189s 00:41:01.429 ************************************ 00:41:01.429 END TEST bdev_json_nonarray 00:41:01.429 ************************************ 00:41:01.429 13:35:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:01.429 13:35:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:41:01.686 13:35:54 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:41:01.686 13:35:54 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:41:01.686 13:35:54 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:41:01.686 13:35:54 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:41:01.686 13:35:54 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:41:01.686 13:35:54 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:41:01.686 13:35:54 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:41:01.686 13:35:54 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:41:01.686 13:35:54 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:41:01.686 13:35:54 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:41:01.686 13:35:54 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:41:01.686 13:35:54 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:02.251 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:02.817 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:41:02.817 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:41:02.817 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:41:03.075 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:41:03.075 00:41:03.075 real 1m1.849s 00:41:03.075 user 1m42.885s 00:41:03.075 sys 0m30.415s 00:41:03.075 13:35:56 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:03.075 ************************************ 00:41:03.075 END TEST blockdev_xnvme 00:41:03.075 ************************************ 00:41:03.075 13:35:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:03.333 13:35:56 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:41:03.333 13:35:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:03.333 13:35:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:03.333 13:35:56 -- 
common/autotest_common.sh@10 -- # set +x 00:41:03.333 ************************************ 00:41:03.333 START TEST ublk 00:41:03.333 ************************************ 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:41:03.333 * Looking for test storage... 00:41:03.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:03.333 13:35:56 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:03.333 13:35:56 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:03.333 13:35:56 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:03.333 13:35:56 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:41:03.333 13:35:56 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:41:03.333 13:35:56 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:41:03.333 13:35:56 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:41:03.333 13:35:56 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:41:03.333 13:35:56 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:41:03.333 13:35:56 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:41:03.333 13:35:56 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:03.333 13:35:56 ublk -- scripts/common.sh@344 -- # case "$op" in 00:41:03.333 13:35:56 ublk -- scripts/common.sh@345 -- # : 1 00:41:03.333 13:35:56 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:03.333 13:35:56 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:03.333 13:35:56 ublk -- scripts/common.sh@365 -- # decimal 1 00:41:03.333 13:35:56 ublk -- scripts/common.sh@353 -- # local d=1 00:41:03.333 13:35:56 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:03.333 13:35:56 ublk -- scripts/common.sh@355 -- # echo 1 00:41:03.333 13:35:56 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:41:03.333 13:35:56 ublk -- scripts/common.sh@366 -- # decimal 2 00:41:03.333 13:35:56 ublk -- scripts/common.sh@353 -- # local d=2 00:41:03.333 13:35:56 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:03.333 13:35:56 ublk -- scripts/common.sh@355 -- # echo 2 00:41:03.333 13:35:56 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:41:03.333 13:35:56 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:03.333 13:35:56 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:03.333 13:35:56 ublk -- scripts/common.sh@368 -- # return 0 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:03.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.333 --rc genhtml_branch_coverage=1 00:41:03.333 --rc genhtml_function_coverage=1 00:41:03.333 --rc genhtml_legend=1 00:41:03.333 --rc geninfo_all_blocks=1 00:41:03.333 --rc geninfo_unexecuted_blocks=1 00:41:03.333 00:41:03.333 ' 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:03.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.333 --rc genhtml_branch_coverage=1 00:41:03.333 --rc genhtml_function_coverage=1 00:41:03.333 --rc genhtml_legend=1 00:41:03.333 --rc geninfo_all_blocks=1 00:41:03.333 --rc geninfo_unexecuted_blocks=1 00:41:03.333 00:41:03.333 ' 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:03.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.333 --rc genhtml_branch_coverage=1 00:41:03.333 --rc genhtml_function_coverage=1 00:41:03.333 --rc genhtml_legend=1 00:41:03.333 --rc geninfo_all_blocks=1 00:41:03.333 --rc geninfo_unexecuted_blocks=1 00:41:03.333 00:41:03.333 ' 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:03.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.333 --rc genhtml_branch_coverage=1 00:41:03.333 --rc genhtml_function_coverage=1 00:41:03.333 --rc genhtml_legend=1 00:41:03.333 --rc geninfo_all_blocks=1 00:41:03.333 --rc geninfo_unexecuted_blocks=1 00:41:03.333 00:41:03.333 ' 00:41:03.333 13:35:56 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:41:03.333 13:35:56 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:41:03.333 13:35:56 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:41:03.333 13:35:56 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:41:03.333 13:35:56 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:41:03.333 13:35:56 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:41:03.333 13:35:56 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:41:03.333 13:35:56 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:41:03.333 13:35:56 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:41:03.333 13:35:56 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:41:03.333 13:35:56 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:41:03.333 13:35:56 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:41:03.333 13:35:56 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:41:03.333 13:35:56 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:41:03.333 13:35:56 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:41:03.333 13:35:56 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:41:03.333 13:35:56 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:41:03.333 13:35:56 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:41:03.333 13:35:56 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:41:03.333 13:35:56 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:03.333 13:35:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:41:03.591 ************************************ 00:41:03.591 START TEST test_save_ublk_config 00:41:03.591 ************************************ 00:41:03.591 13:35:56 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:41:03.591 13:35:56 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:41:03.591 13:35:56 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76161 00:41:03.591 13:35:56 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:41:03.591 13:35:56 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:41:03.591 13:35:56 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76161 00:41:03.591 13:35:56 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76161 ']' 00:41:03.591 13:35:56 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:03.591 13:35:56 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:03.591 13:35:56 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:03.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:03.591 13:35:56 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:03.591 13:35:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:41:03.591 [2024-12-06 13:35:56.589967] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
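For orientation, test_save_config exercises the flow below over JSON-RPC: create a malloc bdev, expose it through a ublk device, then capture the whole runtime state with save_config (the resulting dump is what follows in this log). A hedged sketch of the same steps via rpc.py; the sizes match the malloc0 parameters in the dump (8192 blocks x 4096 B = 32 MiB) and the -q/-d values match the logged ublk_start_disk line, while the harness's actual plumbing (feeding the saved config back through /dev/fd/63) is more involved than shown:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" bdev_malloc_create -b malloc0 32 4096   # 32 MiB, 4 KiB blocks
    "$SPDK/scripts/rpc.py" ublk_create_target
    "$SPDK/scripts/rpc.py" ublk_start_disk malloc0 0 -q 1 -d 128   # exposes /dev/ublkb0
    "$SPDK/scripts/rpc.py" save_config > /tmp/ublk_config.json     # dump like the one below
    # A second spdk_tgt is later started against this config to verify it restores.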
00:41:03.591 [2024-12-06 13:35:56.590167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76161 ] 00:41:03.850 [2024-12-06 13:35:56.799463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:04.123 [2024-12-06 13:35:57.000609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:05.055 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:05.055 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:41:05.055 13:35:58 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:41:05.055 13:35:58 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:41:05.055 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.055 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:41:05.055 [2024-12-06 13:35:58.101439] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:41:05.055 [2024-12-06 13:35:58.102874] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:41:05.313 malloc0 00:41:05.313 [2024-12-06 13:35:58.204578] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:41:05.313 [2024-12-06 13:35:58.204694] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:41:05.313 [2024-12-06 13:35:58.204709] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:41:05.313 [2024-12-06 13:35:58.204720] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:41:05.313 [2024-12-06 13:35:58.210245] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:05.313 [2024-12-06 13:35:58.210269] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:05.313 [2024-12-06 13:35:58.219434] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:05.313 [2024-12-06 13:35:58.219566] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:41:05.313 [2024-12-06 13:35:58.243441] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:41:05.313 0 00:41:05.313 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.313 13:35:58 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:41:05.313 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.313 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:41:05.570 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.570 13:35:58 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:41:05.570 "subsystems": [ 00:41:05.570 { 00:41:05.570 "subsystem": "fsdev", 00:41:05.570 "config": [ 00:41:05.570 { 00:41:05.570 "method": "fsdev_set_opts", 00:41:05.570 "params": { 00:41:05.570 "fsdev_io_pool_size": 65535, 00:41:05.570 "fsdev_io_cache_size": 256 00:41:05.570 } 00:41:05.570 } 00:41:05.570 ] 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "keyring", 00:41:05.570 "config": [] 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "iobuf", 00:41:05.570 "config": [ 00:41:05.570 { 
00:41:05.570 "method": "iobuf_set_options", 00:41:05.570 "params": { 00:41:05.570 "small_pool_count": 8192, 00:41:05.570 "large_pool_count": 1024, 00:41:05.570 "small_bufsize": 8192, 00:41:05.570 "large_bufsize": 135168, 00:41:05.570 "enable_numa": false 00:41:05.570 } 00:41:05.570 } 00:41:05.570 ] 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "sock", 00:41:05.570 "config": [ 00:41:05.570 { 00:41:05.570 "method": "sock_set_default_impl", 00:41:05.570 "params": { 00:41:05.570 "impl_name": "posix" 00:41:05.570 } 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "method": "sock_impl_set_options", 00:41:05.570 "params": { 00:41:05.570 "impl_name": "ssl", 00:41:05.570 "recv_buf_size": 4096, 00:41:05.570 "send_buf_size": 4096, 00:41:05.570 "enable_recv_pipe": true, 00:41:05.570 "enable_quickack": false, 00:41:05.570 "enable_placement_id": 0, 00:41:05.570 "enable_zerocopy_send_server": true, 00:41:05.570 "enable_zerocopy_send_client": false, 00:41:05.570 "zerocopy_threshold": 0, 00:41:05.570 "tls_version": 0, 00:41:05.570 "enable_ktls": false 00:41:05.570 } 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "method": "sock_impl_set_options", 00:41:05.570 "params": { 00:41:05.570 "impl_name": "posix", 00:41:05.570 "recv_buf_size": 2097152, 00:41:05.570 "send_buf_size": 2097152, 00:41:05.570 "enable_recv_pipe": true, 00:41:05.570 "enable_quickack": false, 00:41:05.570 "enable_placement_id": 0, 00:41:05.570 "enable_zerocopy_send_server": true, 00:41:05.570 "enable_zerocopy_send_client": false, 00:41:05.570 "zerocopy_threshold": 0, 00:41:05.570 "tls_version": 0, 00:41:05.570 "enable_ktls": false 00:41:05.570 } 00:41:05.570 } 00:41:05.570 ] 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "vmd", 00:41:05.570 "config": [] 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "accel", 00:41:05.570 "config": [ 00:41:05.570 { 00:41:05.570 "method": "accel_set_options", 00:41:05.570 "params": { 00:41:05.570 "small_cache_size": 128, 00:41:05.570 "large_cache_size": 16, 00:41:05.570 "task_count": 2048, 00:41:05.570 "sequence_count": 2048, 00:41:05.570 "buf_count": 2048 00:41:05.570 } 00:41:05.570 } 00:41:05.570 ] 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "bdev", 00:41:05.570 "config": [ 00:41:05.570 { 00:41:05.570 "method": "bdev_set_options", 00:41:05.570 "params": { 00:41:05.570 "bdev_io_pool_size": 65535, 00:41:05.570 "bdev_io_cache_size": 256, 00:41:05.570 "bdev_auto_examine": true, 00:41:05.570 "iobuf_small_cache_size": 128, 00:41:05.570 "iobuf_large_cache_size": 16 00:41:05.570 } 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "method": "bdev_raid_set_options", 00:41:05.570 "params": { 00:41:05.570 "process_window_size_kb": 1024, 00:41:05.570 "process_max_bandwidth_mb_sec": 0 00:41:05.570 } 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "method": "bdev_iscsi_set_options", 00:41:05.570 "params": { 00:41:05.570 "timeout_sec": 30 00:41:05.570 } 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "method": "bdev_nvme_set_options", 00:41:05.570 "params": { 00:41:05.570 "action_on_timeout": "none", 00:41:05.570 "timeout_us": 0, 00:41:05.570 "timeout_admin_us": 0, 00:41:05.570 "keep_alive_timeout_ms": 10000, 00:41:05.570 "arbitration_burst": 0, 00:41:05.570 "low_priority_weight": 0, 00:41:05.570 "medium_priority_weight": 0, 00:41:05.570 "high_priority_weight": 0, 00:41:05.570 "nvme_adminq_poll_period_us": 10000, 00:41:05.570 "nvme_ioq_poll_period_us": 0, 00:41:05.570 "io_queue_requests": 0, 00:41:05.570 "delay_cmd_submit": true, 00:41:05.570 "transport_retry_count": 4, 00:41:05.570 
"bdev_retry_count": 3, 00:41:05.570 "transport_ack_timeout": 0, 00:41:05.570 "ctrlr_loss_timeout_sec": 0, 00:41:05.570 "reconnect_delay_sec": 0, 00:41:05.570 "fast_io_fail_timeout_sec": 0, 00:41:05.570 "disable_auto_failback": false, 00:41:05.570 "generate_uuids": false, 00:41:05.570 "transport_tos": 0, 00:41:05.570 "nvme_error_stat": false, 00:41:05.570 "rdma_srq_size": 0, 00:41:05.570 "io_path_stat": false, 00:41:05.570 "allow_accel_sequence": false, 00:41:05.570 "rdma_max_cq_size": 0, 00:41:05.570 "rdma_cm_event_timeout_ms": 0, 00:41:05.570 "dhchap_digests": [ 00:41:05.570 "sha256", 00:41:05.570 "sha384", 00:41:05.570 "sha512" 00:41:05.570 ], 00:41:05.570 "dhchap_dhgroups": [ 00:41:05.570 "null", 00:41:05.570 "ffdhe2048", 00:41:05.570 "ffdhe3072", 00:41:05.570 "ffdhe4096", 00:41:05.570 "ffdhe6144", 00:41:05.570 "ffdhe8192" 00:41:05.570 ] 00:41:05.570 } 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "method": "bdev_nvme_set_hotplug", 00:41:05.570 "params": { 00:41:05.570 "period_us": 100000, 00:41:05.570 "enable": false 00:41:05.570 } 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "method": "bdev_malloc_create", 00:41:05.570 "params": { 00:41:05.570 "name": "malloc0", 00:41:05.570 "num_blocks": 8192, 00:41:05.570 "block_size": 4096, 00:41:05.570 "physical_block_size": 4096, 00:41:05.570 "uuid": "ae1d533a-2e89-41a9-94fb-fde30ad8380b", 00:41:05.570 "optimal_io_boundary": 0, 00:41:05.570 "md_size": 0, 00:41:05.570 "dif_type": 0, 00:41:05.570 "dif_is_head_of_md": false, 00:41:05.570 "dif_pi_format": 0 00:41:05.570 } 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "method": "bdev_wait_for_examine" 00:41:05.570 } 00:41:05.570 ] 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "scsi", 00:41:05.570 "config": null 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "scheduler", 00:41:05.570 "config": [ 00:41:05.570 { 00:41:05.570 "method": "framework_set_scheduler", 00:41:05.570 "params": { 00:41:05.570 "name": "static" 00:41:05.570 } 00:41:05.570 } 00:41:05.570 ] 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "vhost_scsi", 00:41:05.570 "config": [] 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "vhost_blk", 00:41:05.570 "config": [] 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "ublk", 00:41:05.570 "config": [ 00:41:05.570 { 00:41:05.570 "method": "ublk_create_target", 00:41:05.570 "params": { 00:41:05.570 "cpumask": "1" 00:41:05.570 } 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "method": "ublk_start_disk", 00:41:05.570 "params": { 00:41:05.570 "bdev_name": "malloc0", 00:41:05.570 "ublk_id": 0, 00:41:05.570 "num_queues": 1, 00:41:05.570 "queue_depth": 128 00:41:05.570 } 00:41:05.570 } 00:41:05.570 ] 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "nbd", 00:41:05.570 "config": [] 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "subsystem": "nvmf", 00:41:05.570 "config": [ 00:41:05.570 { 00:41:05.570 "method": "nvmf_set_config", 00:41:05.570 "params": { 00:41:05.570 "discovery_filter": "match_any", 00:41:05.570 "admin_cmd_passthru": { 00:41:05.570 "identify_ctrlr": false 00:41:05.570 }, 00:41:05.570 "dhchap_digests": [ 00:41:05.570 "sha256", 00:41:05.570 "sha384", 00:41:05.570 "sha512" 00:41:05.570 ], 00:41:05.570 "dhchap_dhgroups": [ 00:41:05.570 "null", 00:41:05.570 "ffdhe2048", 00:41:05.570 "ffdhe3072", 00:41:05.570 "ffdhe4096", 00:41:05.570 "ffdhe6144", 00:41:05.570 "ffdhe8192" 00:41:05.570 ] 00:41:05.570 } 00:41:05.570 }, 00:41:05.570 { 00:41:05.570 "method": "nvmf_set_max_subsystems", 00:41:05.570 "params": { 00:41:05.570 "max_subsystems": 1024 
00:41:05.570 } 00:41:05.570 }, 00:41:05.571 { 00:41:05.571 "method": "nvmf_set_crdt", 00:41:05.571 "params": { 00:41:05.571 "crdt1": 0, 00:41:05.571 "crdt2": 0, 00:41:05.571 "crdt3": 0 00:41:05.571 } 00:41:05.571 } 00:41:05.571 ] 00:41:05.571 }, 00:41:05.571 { 00:41:05.571 "subsystem": "iscsi", 00:41:05.571 "config": [ 00:41:05.571 { 00:41:05.571 "method": "iscsi_set_options", 00:41:05.571 "params": { 00:41:05.571 "node_base": "iqn.2016-06.io.spdk", 00:41:05.571 "max_sessions": 128, 00:41:05.571 "max_connections_per_session": 2, 00:41:05.571 "max_queue_depth": 64, 00:41:05.571 "default_time2wait": 2, 00:41:05.571 "default_time2retain": 20, 00:41:05.571 "first_burst_length": 8192, 00:41:05.571 "immediate_data": true, 00:41:05.571 "allow_duplicated_isid": false, 00:41:05.571 "error_recovery_level": 0, 00:41:05.571 "nop_timeout": 60, 00:41:05.571 "nop_in_interval": 30, 00:41:05.571 "disable_chap": false, 00:41:05.571 "require_chap": false, 00:41:05.571 "mutual_chap": false, 00:41:05.571 "chap_group": 0, 00:41:05.571 "max_large_datain_per_connection": 64, 00:41:05.571 "max_r2t_per_connection": 4, 00:41:05.571 "pdu_pool_size": 36864, 00:41:05.571 "immediate_data_pool_size": 16384, 00:41:05.571 "data_out_pool_size": 2048 00:41:05.571 } 00:41:05.571 } 00:41:05.571 ] 00:41:05.571 } 00:41:05.571 ] 00:41:05.571 }' 00:41:05.571 13:35:58 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76161 00:41:05.571 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76161 ']' 00:41:05.571 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76161 00:41:05.571 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:41:05.571 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:05.571 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76161 00:41:05.571 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:05.571 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:05.571 killing process with pid 76161 00:41:05.571 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76161' 00:41:05.571 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76161 00:41:05.571 13:35:58 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76161 00:41:07.488 [2024-12-06 13:36:00.573911] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:41:07.745 [2024-12-06 13:36:00.604455] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:07.745 [2024-12-06 13:36:00.604653] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:41:07.745 [2024-12-06 13:36:00.617448] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:07.745 [2024-12-06 13:36:00.617516] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:41:07.745 [2024-12-06 13:36:00.617536] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:41:07.745 [2024-12-06 13:36:00.617590] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:41:07.745 [2024-12-06 13:36:00.617771] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:41:09.662 13:36:02 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76240 00:41:09.662 13:36:02 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 76240 00:41:09.662 13:36:02 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76240 ']' 00:41:09.663 13:36:02 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:09.663 13:36:02 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:09.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:09.663 13:36:02 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:09.663 13:36:02 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:41:09.663 13:36:02 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:09.663 13:36:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:41:09.663 13:36:02 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:41:09.663 "subsystems": [ 00:41:09.663 { 00:41:09.663 "subsystem": "fsdev", 00:41:09.663 "config": [ 00:41:09.663 { 00:41:09.663 "method": "fsdev_set_opts", 00:41:09.663 "params": { 00:41:09.663 "fsdev_io_pool_size": 65535, 00:41:09.663 "fsdev_io_cache_size": 256 00:41:09.663 } 00:41:09.663 } 00:41:09.663 ] 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "keyring", 00:41:09.663 "config": [] 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "iobuf", 00:41:09.663 "config": [ 00:41:09.663 { 00:41:09.663 "method": "iobuf_set_options", 00:41:09.663 "params": { 00:41:09.663 "small_pool_count": 8192, 00:41:09.663 "large_pool_count": 1024, 00:41:09.663 "small_bufsize": 8192, 00:41:09.663 "large_bufsize": 135168, 00:41:09.663 "enable_numa": false 00:41:09.663 } 00:41:09.663 } 00:41:09.663 ] 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "sock", 00:41:09.663 "config": [ 00:41:09.663 { 00:41:09.663 "method": "sock_set_default_impl", 00:41:09.663 "params": { 00:41:09.663 "impl_name": "posix" 00:41:09.663 } 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "method": "sock_impl_set_options", 00:41:09.663 "params": { 00:41:09.663 "impl_name": "ssl", 00:41:09.663 "recv_buf_size": 4096, 00:41:09.663 "send_buf_size": 4096, 00:41:09.663 "enable_recv_pipe": true, 00:41:09.663 "enable_quickack": false, 00:41:09.663 "enable_placement_id": 0, 00:41:09.663 "enable_zerocopy_send_server": true, 00:41:09.663 "enable_zerocopy_send_client": false, 00:41:09.663 "zerocopy_threshold": 0, 00:41:09.663 "tls_version": 0, 00:41:09.663 "enable_ktls": false 00:41:09.663 } 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "method": "sock_impl_set_options", 00:41:09.663 "params": { 00:41:09.663 "impl_name": "posix", 00:41:09.663 "recv_buf_size": 2097152, 00:41:09.663 "send_buf_size": 2097152, 00:41:09.663 "enable_recv_pipe": true, 00:41:09.663 "enable_quickack": false, 00:41:09.663 "enable_placement_id": 0, 00:41:09.663 "enable_zerocopy_send_server": true, 00:41:09.663 "enable_zerocopy_send_client": false, 00:41:09.663 "zerocopy_threshold": 0, 00:41:09.663 "tls_version": 0, 00:41:09.663 "enable_ktls": false 00:41:09.663 } 00:41:09.663 } 00:41:09.663 ] 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "vmd", 00:41:09.663 "config": [] 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "accel", 00:41:09.663 "config": [ 00:41:09.663 { 00:41:09.663 "method": "accel_set_options", 00:41:09.663 "params": { 00:41:09.663 "small_cache_size": 128, 
00:41:09.663 "large_cache_size": 16, 00:41:09.663 "task_count": 2048, 00:41:09.663 "sequence_count": 2048, 00:41:09.663 "buf_count": 2048 00:41:09.663 } 00:41:09.663 } 00:41:09.663 ] 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "bdev", 00:41:09.663 "config": [ 00:41:09.663 { 00:41:09.663 "method": "bdev_set_options", 00:41:09.663 "params": { 00:41:09.663 "bdev_io_pool_size": 65535, 00:41:09.663 "bdev_io_cache_size": 256, 00:41:09.663 "bdev_auto_examine": true, 00:41:09.663 "iobuf_small_cache_size": 128, 00:41:09.663 "iobuf_large_cache_size": 16 00:41:09.663 } 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "method": "bdev_raid_set_options", 00:41:09.663 "params": { 00:41:09.663 "process_window_size_kb": 1024, 00:41:09.663 "process_max_bandwidth_mb_sec": 0 00:41:09.663 } 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "method": "bdev_iscsi_set_options", 00:41:09.663 "params": { 00:41:09.663 "timeout_sec": 30 00:41:09.663 } 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "method": "bdev_nvme_set_options", 00:41:09.663 "params": { 00:41:09.663 "action_on_timeout": "none", 00:41:09.663 "timeout_us": 0, 00:41:09.663 "timeout_admin_us": 0, 00:41:09.663 "keep_alive_timeout_ms": 10000, 00:41:09.663 "arbitration_burst": 0, 00:41:09.663 "low_priority_weight": 0, 00:41:09.663 "medium_priority_weight": 0, 00:41:09.663 "high_priority_weight": 0, 00:41:09.663 "nvme_adminq_poll_period_us": 10000, 00:41:09.663 "nvme_ioq_poll_period_us": 0, 00:41:09.663 "io_queue_requests": 0, 00:41:09.663 "delay_cmd_submit": true, 00:41:09.663 "transport_retry_count": 4, 00:41:09.663 "bdev_retry_count": 3, 00:41:09.663 "transport_ack_timeout": 0, 00:41:09.663 "ctrlr_loss_timeout_sec": 0, 00:41:09.663 "reconnect_delay_sec": 0, 00:41:09.663 "fast_io_fail_timeout_sec": 0, 00:41:09.663 "disable_auto_failback": false, 00:41:09.663 "generate_uuids": false, 00:41:09.663 "transport_tos": 0, 00:41:09.663 "nvme_error_stat": false, 00:41:09.663 "rdma_srq_size": 0, 00:41:09.663 "io_path_stat": false, 00:41:09.663 "allow_accel_sequence": false, 00:41:09.663 "rdma_max_cq_size": 0, 00:41:09.663 "rdma_cm_event_timeout_ms": 0, 00:41:09.663 "dhchap_digests": [ 00:41:09.663 "sha256", 00:41:09.663 "sha384", 00:41:09.663 "sha512" 00:41:09.663 ], 00:41:09.663 "dhchap_dhgroups": [ 00:41:09.663 "null", 00:41:09.663 "ffdhe2048", 00:41:09.663 "ffdhe3072", 00:41:09.663 "ffdhe4096", 00:41:09.663 "ffdhe6144", 00:41:09.663 "ffdhe8192" 00:41:09.663 ] 00:41:09.663 } 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "method": "bdev_nvme_set_hotplug", 00:41:09.663 "params": { 00:41:09.663 "period_us": 100000, 00:41:09.663 "enable": false 00:41:09.663 } 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "method": "bdev_malloc_create", 00:41:09.663 "params": { 00:41:09.663 "name": "malloc0", 00:41:09.663 "num_blocks": 8192, 00:41:09.663 "block_size": 4096, 00:41:09.663 "physical_block_size": 4096, 00:41:09.663 "uuid": "ae1d533a-2e89-41a9-94fb-fde30ad8380b", 00:41:09.663 "optimal_io_boundary": 0, 00:41:09.663 "md_size": 0, 00:41:09.663 "dif_type": 0, 00:41:09.663 "dif_is_head_of_md": false, 00:41:09.663 "dif_pi_format": 0 00:41:09.663 } 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "method": "bdev_wait_for_examine" 00:41:09.663 } 00:41:09.663 ] 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "scsi", 00:41:09.663 "config": null 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "scheduler", 00:41:09.663 "config": [ 00:41:09.663 { 00:41:09.663 "method": "framework_set_scheduler", 00:41:09.663 "params": { 00:41:09.663 "name": "static" 00:41:09.663 } 
00:41:09.663 } 00:41:09.663 ] 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "vhost_scsi", 00:41:09.663 "config": [] 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "vhost_blk", 00:41:09.663 "config": [] 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "ublk", 00:41:09.663 "config": [ 00:41:09.663 { 00:41:09.663 "method": "ublk_create_target", 00:41:09.663 "params": { 00:41:09.663 "cpumask": "1" 00:41:09.663 } 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "method": "ublk_start_disk", 00:41:09.663 "params": { 00:41:09.663 "bdev_name": "malloc0", 00:41:09.663 "ublk_id": 0, 00:41:09.663 "num_queues": 1, 00:41:09.663 "queue_depth": 128 00:41:09.663 } 00:41:09.663 } 00:41:09.663 ] 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "nbd", 00:41:09.663 "config": [] 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "subsystem": "nvmf", 00:41:09.663 "config": [ 00:41:09.663 { 00:41:09.663 "method": "nvmf_set_config", 00:41:09.663 "params": { 00:41:09.663 "discovery_filter": "match_any", 00:41:09.663 "admin_cmd_passthru": { 00:41:09.663 "identify_ctrlr": false 00:41:09.663 }, 00:41:09.663 "dhchap_digests": [ 00:41:09.663 "sha256", 00:41:09.663 "sha384", 00:41:09.663 "sha512" 00:41:09.663 ], 00:41:09.663 "dhchap_dhgroups": [ 00:41:09.663 "null", 00:41:09.663 "ffdhe2048", 00:41:09.663 "ffdhe3072", 00:41:09.663 "ffdhe4096", 00:41:09.663 "ffdhe6144", 00:41:09.663 "ffdhe8192" 00:41:09.663 ] 00:41:09.663 } 00:41:09.663 }, 00:41:09.663 { 00:41:09.663 "method": "nvmf_set_max_subsystems", 00:41:09.664 "params": { 00:41:09.664 "max_subsystems": 1024 00:41:09.664 } 00:41:09.664 }, 00:41:09.664 { 00:41:09.664 "method": "nvmf_set_crdt", 00:41:09.664 "params": { 00:41:09.664 "crdt1": 0, 00:41:09.664 "crdt2": 0, 00:41:09.664 "crdt3": 0 00:41:09.664 } 00:41:09.664 } 00:41:09.664 ] 00:41:09.664 }, 00:41:09.664 { 00:41:09.664 "subsystem": "iscsi", 00:41:09.664 "config": [ 00:41:09.664 { 00:41:09.664 "method": "iscsi_set_options", 00:41:09.664 "params": { 00:41:09.664 "node_base": "iqn.2016-06.io.spdk", 00:41:09.664 "max_sessions": 128, 00:41:09.664 "max_connections_per_session": 2, 00:41:09.664 "max_queue_depth": 64, 00:41:09.664 "default_time2wait": 2, 00:41:09.664 "default_time2retain": 20, 00:41:09.664 "first_burst_length": 8192, 00:41:09.664 "immediate_data": true, 00:41:09.664 "allow_duplicated_isid": false, 00:41:09.664 "error_recovery_level": 0, 00:41:09.664 "nop_timeout": 60, 00:41:09.664 "nop_in_interval": 30, 00:41:09.664 "disable_chap": false, 00:41:09.664 "require_chap": false, 00:41:09.664 "mutual_chap": false, 00:41:09.664 "chap_group": 0, 00:41:09.664 "max_large_datain_per_connection": 64, 00:41:09.664 "max_r2t_per_connection": 4, 00:41:09.664 "pdu_pool_size": 36864, 00:41:09.664 "immediate_data_pool_size": 16384, 00:41:09.664 "data_out_pool_size": 2048 00:41:09.664 } 00:41:09.664 } 00:41:09.664 ] 00:41:09.664 } 00:41:09.664 ] 00:41:09.664 }' 00:41:09.923 [2024-12-06 13:36:02.864507] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
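The JSON dump above matches the configuration the first target saved before it was killed; ublk.sh pipes it back into the replacement spdk_tgt through the /dev/fd/63 process substitution visible in its command line, and the test then only has to check that /dev/ublkb0 reappears. A minimal sketch of that round-trip, assuming a stock SPDK checkout, the default /var/tmp/spdk.sock socket, and a hypothetical $TGT_PID variable holding the target's pid:

    ./scripts/rpc.py save_config > ublk.json      # dump live config, ublk target and disk included
    kill "$TGT_PID"; wait "$TGT_PID"              # shutdown path issues STOP_DEV/DEL_DEV itself
    ./build/bin/spdk_tgt -L ublk -c ublk.json &   # restart straight from the saved JSON
    TGT_PID=$!
    ./scripts/rpc.py ublk_get_disks               # /dev/ublkb0 should be listed again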
00:41:09.923 [2024-12-06 13:36:02.864704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76240 ] 00:41:10.182 [2024-12-06 13:36:03.044486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:10.182 [2024-12-06 13:36:03.193015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:11.558 [2024-12-06 13:36:04.449462] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:41:11.558 [2024-12-06 13:36:04.450869] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:41:11.558 [2024-12-06 13:36:04.457591] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:41:11.558 [2024-12-06 13:36:04.457730] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:41:11.558 [2024-12-06 13:36:04.457745] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:41:11.558 [2024-12-06 13:36:04.457754] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:41:11.558 [2024-12-06 13:36:04.465582] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:11.558 [2024-12-06 13:36:04.465608] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:11.558 [2024-12-06 13:36:04.472474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:11.558 [2024-12-06 13:36:04.472606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:41:11.558 [2024-12-06 13:36:04.489426] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76240 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76240 ']' 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76240 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76240 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:11.558 killing process with pid 76240 00:41:11.558 
13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76240' 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76240 00:41:11.558 13:36:04 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76240 00:41:13.476 [2024-12-06 13:36:06.322077] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:41:13.476 [2024-12-06 13:36:06.355516] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:13.476 [2024-12-06 13:36:06.355702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:41:13.476 [2024-12-06 13:36:06.360467] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:13.476 [2024-12-06 13:36:06.360529] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:41:13.476 [2024-12-06 13:36:06.360541] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:41:13.476 [2024-12-06 13:36:06.360571] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:41:13.476 [2024-12-06 13:36:06.360753] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:41:15.392 13:36:08 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:41:15.392 00:41:15.392 real 0m11.996s 00:41:15.392 user 0m9.009s 00:41:15.392 sys 0m3.942s 00:41:15.392 13:36:08 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:15.392 ************************************ 00:41:15.392 END TEST test_save_ublk_config 00:41:15.392 ************************************ 00:41:15.392 13:36:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:41:15.392 13:36:08 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76331 00:41:15.392 13:36:08 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:41:15.392 13:36:08 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:41:15.392 13:36:08 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76331 00:41:15.392 13:36:08 ublk -- common/autotest_common.sh@835 -- # '[' -z 76331 ']' 00:41:15.392 13:36:08 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:15.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:15.392 13:36:08 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:15.392 13:36:08 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:15.392 13:36:08 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:15.392 13:36:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:41:15.651 [2024-12-06 13:36:08.623547] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
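The waitforlisten call above parks the suite until the freshly started target (pid 76331, launched with -m 0x3, hence the two reactor cores reported next) is answering on its RPC socket. A rough equivalent of that helper, reduced to a poll loop over a cheap RPC and assuming the default socket path:

    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done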
00:41:15.651 [2024-12-06 13:36:08.623697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76331 ] 00:41:15.910 [2024-12-06 13:36:08.805702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:15.910 [2024-12-06 13:36:08.956386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.910 [2024-12-06 13:36:08.956454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:17.286 13:36:10 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:17.286 13:36:10 ublk -- common/autotest_common.sh@868 -- # return 0 00:41:17.286 13:36:10 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:41:17.286 13:36:10 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:17.286 13:36:10 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:17.286 13:36:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:41:17.286 ************************************ 00:41:17.286 START TEST test_create_ublk 00:41:17.286 ************************************ 00:41:17.286 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:41:17.286 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:41:17.286 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.286 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:17.286 [2024-12-06 13:36:10.065434] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:41:17.286 [2024-12-06 13:36:10.069127] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:41:17.286 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.286 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:41:17.286 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:41:17.286 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.286 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:17.545 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:41:17.545 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.545 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:17.545 [2024-12-06 13:36:10.435643] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:41:17.545 [2024-12-06 13:36:10.436220] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:41:17.545 [2024-12-06 13:36:10.436246] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:41:17.545 [2024-12-06 13:36:10.436258] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:41:17.545 [2024-12-06 13:36:10.447428] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:17.545 [2024-12-06 13:36:10.447456] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:17.545 
[2024-12-06 13:36:10.451881] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:17.545 [2024-12-06 13:36:10.452729] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:41:17.545 [2024-12-06 13:36:10.464066] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:41:17.545 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:41:17.545 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.545 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:17.545 13:36:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:41:17.545 { 00:41:17.545 "ublk_device": "/dev/ublkb0", 00:41:17.545 "id": 0, 00:41:17.545 "queue_depth": 512, 00:41:17.545 "num_queues": 4, 00:41:17.545 "bdev_name": "Malloc0" 00:41:17.545 } 00:41:17.545 ]' 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:41:17.545 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:41:17.805 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:41:17.805 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:41:17.805 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:41:17.805 13:36:10 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:41:17.805 13:36:10 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:41:17.805 13:36:10 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:41:17.805 13:36:10 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:41:17.805 13:36:10 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:41:17.805 13:36:10 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:41:17.805 13:36:10 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:41:17.805 13:36:10 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:41:17.805 13:36:10 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:41:17.805 13:36:10 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:41:17.805 13:36:10 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
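The fio template assembled above expands into the single job that runs next. Because the job is --time_based with --runtime=10, the write phase consumes the entire runtime, which is exactly why fio immediately warns that the verification read phase will never start. An illustrative, non-time-based variant of the same job, which would instead write the full 128 MiB once and then read the 0xcc pattern back:

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --do_verify=1 --verify=pattern \
        --verify_pattern=0xcc --verify_state_save=0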
00:41:17.805 13:36:10 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:41:17.805 fio: verification read phase will never start because write phase uses all of runtime 00:41:17.805 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:41:17.805 fio-3.35 00:41:17.805 Starting 1 process 00:41:30.037 00:41:30.037 fio_test: (groupid=0, jobs=1): err= 0: pid=76389: Fri Dec 6 13:36:20 2024 00:41:30.037 write: IOPS=9384, BW=36.7MiB/s (38.4MB/s)(367MiB/10001msec); 0 zone resets 00:41:30.037 clat (usec): min=39, max=4104, avg=105.59, stdev=103.85 00:41:30.037 lat (usec): min=39, max=4105, avg=106.15, stdev=104.36 00:41:30.037 clat percentiles (usec): 00:41:30.037 | 1.00th=[ 42], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 60], 00:41:30.037 | 30.00th=[ 99], 40.00th=[ 109], 50.00th=[ 114], 60.00th=[ 117], 00:41:30.037 | 70.00th=[ 121], 80.00th=[ 125], 90.00th=[ 131], 95.00th=[ 137], 00:41:30.037 | 99.00th=[ 153], 99.50th=[ 167], 99.90th=[ 2114], 99.95th=[ 2868], 00:41:30.037 | 99.99th=[ 3490] 00:41:30.037 bw ( KiB/s): min=31632, max=67400, per=100.00%, avg=37850.32, stdev=12274.29, samples=19 00:41:30.037 iops : min= 7908, max=16850, avg=9462.58, stdev=3068.57, samples=19 00:41:30.037 lat (usec) : 50=4.42%, 100=26.42%, 250=68.91%, 500=0.04%, 750=0.02% 00:41:30.037 lat (usec) : 1000=0.02% 00:41:30.037 lat (msec) : 2=0.07%, 4=0.11%, 10=0.01% 00:41:30.037 cpu : usr=2.02%, sys=6.63%, ctx=93858, majf=0, minf=797 00:41:30.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.037 issued rwts: total=0,93859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:30.037 00:41:30.037 Run status group 0 (all jobs): 00:41:30.037 WRITE: bw=36.7MiB/s (38.4MB/s), 36.7MiB/s-36.7MiB/s (38.4MB/s-38.4MB/s), io=367MiB (384MB), run=10001-10001msec 00:41:30.037 00:41:30.037 Disk stats (read/write): 00:41:30.038 ublkb0: ios=0/93085, merge=0/0, ticks=0/9060, in_queue=9060, util=99.09% 00:41:30.038 13:36:20 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:41:30.038 13:36:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.038 13:36:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 [2024-12-06 13:36:20.981718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:41:30.038 [2024-12-06 13:36:21.027850] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:30.038 [2024-12-06 13:36:21.029325] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:41:30.038 [2024-12-06 13:36:21.034447] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:30.038 [2024-12-06 13:36:21.034760] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:41:30.038 [2024-12-06 13:36:21.034778] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.038 13:36:21 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 [2024-12-06 13:36:21.051523] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:41:30.038 request: 00:41:30.038 { 00:41:30.038 "ublk_id": 0, 00:41:30.038 "method": "ublk_stop_disk", 00:41:30.038 "req_id": 1 00:41:30.038 } 00:41:30.038 Got JSON-RPC error response 00:41:30.038 response: 00:41:30.038 { 00:41:30.038 "code": -19, 00:41:30.038 "message": "No such device" 00:41:30.038 } 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:30.038 13:36:21 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 [2024-12-06 13:36:21.068508] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:41:30.038 [2024-12-06 13:36:21.077404] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:41:30.038 [2024-12-06 13:36:21.077449] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.038 13:36:21 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.038 13:36:21 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:41:30.038 13:36:21 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.038 13:36:21 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:41:30.038 13:36:21 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:41:30.038 13:36:21 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:41:30.038 13:36:21 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.038 13:36:21 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:41:30.038 13:36:21 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:41:30.038 13:36:21 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:41:30.038 00:41:30.038 real 0m11.882s 00:41:30.038 user 0m0.576s 00:41:30.038 sys 0m0.813s 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:30.038 13:36:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 ************************************ 00:41:30.038 END TEST test_create_ublk 00:41:30.038 ************************************ 00:41:30.038 13:36:21 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:41:30.038 13:36:21 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:30.038 13:36:21 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:30.038 13:36:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 ************************************ 00:41:30.038 START TEST test_create_multi_ublk 00:41:30.038 ************************************ 00:41:30.038 13:36:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:41:30.038 13:36:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:41:30.038 13:36:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.038 13:36:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 [2024-12-06 13:36:22.008415] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:41:30.038 [2024-12-06 13:36:22.011209] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 [2024-12-06 13:36:22.296620] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:41:30.038 [2024-12-06 13:36:22.297100] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:41:30.038 [2024-12-06 13:36:22.297117] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:41:30.038 [2024-12-06 13:36:22.297131] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:41:30.038 [2024-12-06 13:36:22.305725] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:30.038 [2024-12-06 13:36:22.305756] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:30.038 [2024-12-06 13:36:22.312435] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:30.038 [2024-12-06 13:36:22.313024] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:41:30.038 [2024-12-06 13:36:22.321662] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 [2024-12-06 13:36:22.631574] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:41:30.038 [2024-12-06 13:36:22.632029] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:41:30.038 [2024-12-06 13:36:22.632048] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:41:30.038 [2024-12-06 13:36:22.632057] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:41:30.038 [2024-12-06 13:36:22.642464] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:30.038 [2024-12-06 13:36:22.642483] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:30.038 [2024-12-06 13:36:22.650432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:30.038 [2024-12-06 13:36:22.651017] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:41:30.038 [2024-12-06 13:36:22.667456] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:30.038 
13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.038 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.039 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:41:30.039 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:41:30.039 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.039 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.039 [2024-12-06 13:36:22.957566] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:41:30.039 [2024-12-06 13:36:22.958031] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:41:30.039 [2024-12-06 13:36:22.958048] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:41:30.039 [2024-12-06 13:36:22.958059] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:41:30.039 [2024-12-06 13:36:22.965458] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:30.039 [2024-12-06 13:36:22.965485] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:30.039 [2024-12-06 13:36:22.972417] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:30.039 [2024-12-06 13:36:22.973028] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:41:30.039 [2024-12-06 13:36:22.981488] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:41:30.039 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.039 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:41:30.039 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:30.039 13:36:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:41:30.039 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.039 13:36:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.298 13:36:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.298 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:41:30.298 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:41:30.298 13:36:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.298 13:36:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.298 [2024-12-06 13:36:23.271630] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:41:30.298 [2024-12-06 13:36:23.272108] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:41:30.298 [2024-12-06 13:36:23.272129] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:41:30.298 [2024-12-06 13:36:23.272138] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:41:30.298 
[2024-12-06 13:36:23.279453] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:41:30.298 [2024-12-06 13:36:23.279475] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:41:30.298 [2024-12-06 13:36:23.287443] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:41:30.298 [2024-12-06 13:36:23.288096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:41:30.298 [2024-12-06 13:36:23.296518] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:41:30.298 13:36:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.298 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:41:30.298 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:41:30.298 13:36:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.298 13:36:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:30.298 13:36:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.298 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:41:30.298 { 00:41:30.298 "ublk_device": "/dev/ublkb0", 00:41:30.298 "id": 0, 00:41:30.298 "queue_depth": 512, 00:41:30.298 "num_queues": 4, 00:41:30.298 "bdev_name": "Malloc0" 00:41:30.298 }, 00:41:30.298 { 00:41:30.298 "ublk_device": "/dev/ublkb1", 00:41:30.298 "id": 1, 00:41:30.298 "queue_depth": 512, 00:41:30.299 "num_queues": 4, 00:41:30.299 "bdev_name": "Malloc1" 00:41:30.299 }, 00:41:30.299 { 00:41:30.299 "ublk_device": "/dev/ublkb2", 00:41:30.299 "id": 2, 00:41:30.299 "queue_depth": 512, 00:41:30.299 "num_queues": 4, 00:41:30.299 "bdev_name": "Malloc2" 00:41:30.299 }, 00:41:30.299 { 00:41:30.299 "ublk_device": "/dev/ublkb3", 00:41:30.299 "id": 3, 00:41:30.299 "queue_depth": 512, 00:41:30.299 "num_queues": 4, 00:41:30.299 "bdev_name": "Malloc3" 00:41:30.299 } 00:41:30.299 ]' 00:41:30.299 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:41:30.299 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:30.299 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:41:30.299 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:41:30.299 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:41:30.557 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:41:30.557 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:41:30.557 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:41:30.557 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:41:30.557 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:41:30.557 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:41:30.557 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:41:30.558 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:30.558 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:41:30.558 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:41:30.558 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:41:30.558 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:41:30.558 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:41:30.558 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:41:30.558 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:41:30.816 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:41:31.074 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:41:31.074 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:31.074 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:41:31.074 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:41:31.074 13:36:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.075 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:31.075 [2024-12-06 13:36:24.160568] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:41:31.333 [2024-12-06 13:36:24.202993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:31.333 [2024-12-06 13:36:24.208678] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:41:31.333 [2024-12-06 13:36:24.216485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:31.333 [2024-12-06 13:36:24.216866] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:41:31.333 [2024-12-06 13:36:24.216887] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:31.333 [2024-12-06 13:36:24.232572] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:41:31.333 [2024-12-06 13:36:24.270492] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:31.333 [2024-12-06 13:36:24.271668] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:41:31.333 [2024-12-06 13:36:24.274763] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:31.333 [2024-12-06 13:36:24.275135] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:41:31.333 [2024-12-06 13:36:24.275147] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:41:31.333 [2024-12-06 13:36:24.288559] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:41:31.333 [2024-12-06 13:36:24.327511] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:41:31.333 [2024-12-06 13:36:24.328645] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:41:31.333 [2024-12-06 13:36:24.337521] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:41:31.333 [2024-12-06 13:36:24.337911] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:41:31.333 [2024-12-06 13:36:24.337925] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x
00:41:31.333 [2024-12-06 13:36:24.352579] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV
00:41:31.333 [2024-12-06 13:36:24.391986] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed
00:41:31.333 [2024-12-06 13:36:24.393277] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV
00:41:31.333 [2024-12-06 13:36:24.400566] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed
00:41:31.333 [2024-12-06 13:36:24.400924] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq
00:41:31.333 [2024-12-06 13:36:24.400951] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped
00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:31.333 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target
00:41:31.899 [2024-12-06 13:36:24.697551] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:41:31.899 [2024-12-06 13:36:24.705940] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:41:31.899 [2024-12-06 13:36:24.705983] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:41:31.899 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3
00:41:31.899 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:41:31.899 13:36:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0
00:41:31.899 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:31.899 13:36:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:41:32.464 13:36:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:32.464 13:36:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:41:32.464 13:36:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1
00:41:32.464 13:36:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:32.464 13:36:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:41:33.029 13:36:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:33.029 13:36:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:41:33.029 13:36:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2
00:41:33.029 13:36:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:33.029 13:36:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:41:33.286 13:36:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:33.286 13:36:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:41:33.286 13:36:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3
00:41:33.286 13:36:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:33.286 13:36:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:41:33.544 13:36:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:33.544 13:36:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices
00:41:33.544 13:36:26 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:41:33.544 13:36:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:33.544 13:36:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:41:33.544 13:36:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:33.544 13:36:26 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:41:33.544 13:36:26 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length
00:41:33.544 13:36:26 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:41:33.544 13:36:26 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:41:33.544 13:36:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:33.544 13:36:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:41:33.802 13:36:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:33.802 13:36:26 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:41:33.802 13:36:26 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length
00:41:33.802 ************************************
00:41:33.802 END TEST test_create_multi_ublk
00:41:33.802 ************************************
00:41:33.802 13:36:26 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:41:33.802
00:41:33.802 real 0m4.704s
00:41:33.802 user 0m1.063s
00:41:33.802 sys 0m0.260s
00:41:33.802 13:36:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:33.802 13:36:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:41:33.802 13:36:26 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:41:33.802 13:36:26 ublk -- ublk/ublk.sh@147 -- # cleanup
00:41:33.802 13:36:26 ublk -- ublk/ublk.sh@130 -- # killprocess 76331
00:41:33.802 13:36:26 ublk -- common/autotest_common.sh@954 -- # '[' -z 76331 ']'
00:41:33.802 13:36:26 ublk -- common/autotest_common.sh@958 -- # kill -0 76331
00:41:33.802 13:36:26 ublk -- common/autotest_common.sh@959 -- # uname
00:41:33.802 13:36:26 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:41:33.802 13:36:26 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76331
00:41:33.803 killing process with pid 76331 13:36:26 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:41:33.803 13:36:26 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:41:33.803 13:36:26 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76331'
00:41:33.803 13:36:26 ublk -- common/autotest_common.sh@973 -- # kill 76331
00:41:33.803 13:36:26 ublk -- common/autotest_common.sh@978 -- # wait 76331
00:41:35.178 [2024-12-06 13:36:27.989836] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:41:35.178 [2024-12-06 13:36:27.989922] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:41:36.554 ************************************
00:41:36.554 END TEST ublk
00:41:36.554 ************************************
00:41:36.554
00:41:36.554 real 0m33.114s
00:41:36.554 user 0m46.906s
00:41:36.554 sys 0m10.406s
00:41:36.554 13:36:29 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:36.554 13:36:29 ublk -- common/autotest_common.sh@10 -- # set +x
00:41:36.554 13:36:29 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:41:36.554 13:36:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:41:36.554 13:36:29 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:36.554 13:36:29 -- common/autotest_common.sh@10 -- # set +x
00:41:36.554 ************************************
00:41:36.554 START TEST ublk_recovery
00:41:36.554 ************************************
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:41:36.554 * Looking for test storage...
00:41:36.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-:
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-:
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<'
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@345 -- # : 1
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@365 -- # decimal 1
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@353 -- # local d=1
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@355 -- # echo 1
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@366 -- # decimal 2
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@353 -- # local d=2
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@355 -- # echo 2
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:41:36.554 13:36:29 ublk_recovery -- scripts/common.sh@368 -- # return 0
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:41:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:36.554 --rc genhtml_branch_coverage=1
00:41:36.554 --rc genhtml_function_coverage=1
00:41:36.554 --rc genhtml_legend=1
00:41:36.554 --rc geninfo_all_blocks=1
00:41:36.554 --rc geninfo_unexecuted_blocks=1
00:41:36.554
00:41:36.554 '
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:41:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:36.554 --rc genhtml_branch_coverage=1
00:41:36.554 --rc genhtml_function_coverage=1
00:41:36.554 --rc genhtml_legend=1
00:41:36.554 --rc geninfo_all_blocks=1
00:41:36.554 --rc geninfo_unexecuted_blocks=1
00:41:36.554
00:41:36.554 '
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:41:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:36.554 --rc genhtml_branch_coverage=1
00:41:36.554 --rc genhtml_function_coverage=1
00:41:36.554 --rc genhtml_legend=1
00:41:36.554 --rc geninfo_all_blocks=1
00:41:36.554 --rc geninfo_unexecuted_blocks=1
00:41:36.554
00:41:36.554 '
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:41:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:41:36.554 --rc genhtml_branch_coverage=1
00:41:36.554 --rc genhtml_function_coverage=1
00:41:36.554 --rc genhtml_legend=1
00:41:36.554 --rc geninfo_all_blocks=1
00:41:36.554 --rc geninfo_unexecuted_blocks=1
00:41:36.554
00:41:36.554 '
00:41:36.554 13:36:29 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:41:36.554 13:36:29 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:41:36.554 13:36:29 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512
00:41:36.554 13:36:29 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:41:36.554 13:36:29 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096
00:41:36.554 13:36:29 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:41:36.554 13:36:29 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:41:36.554 13:36:29 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:41:36.554 13:36:29 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:41:36.554 13:36:29 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv
00:41:36.554 13:36:29 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76768
00:41:36.554 13:36:29 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:41:36.554 13:36:29 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:41:36.554 13:36:29 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76768
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76768 ']'
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:41:36.554 13:36:29 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:36.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:41:36.555 13:36:29 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:41:36.555 13:36:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:41:36.813 [2024-12-06 13:36:29.707488] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization...
00:41:36.813 [2024-12-06 13:36:29.708306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76768 ]
00:41:36.813 [2024-12-06 13:36:29.909067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:41:37.072 [2024-12-06 13:36:30.030742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:37.072 [2024-12-06 13:36:30.030777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:41:38.055 13:36:30 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:41:38.055 13:36:30 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:41:38.055 13:36:30 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target
00:41:38.055 13:36:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:38.055 13:36:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:41:38.055 [2024-12-06 13:36:30.897423] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:41:38.055 [2024-12-06 13:36:30.900202] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:41:38.055 13:36:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:38.055 13:36:30 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:41:38.055 13:36:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:38.055 13:36:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:41:38.055 malloc0
00:41:38.055 13:36:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:38.055 13:36:31 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128
00:41:38.055 13:36:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:38.055 13:36:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:41:38.055 [2024-12-06 13:36:31.048584] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128
00:41:38.055 [2024-12-06 13:36:31.048707] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1
00:41:38.055 [2024-12-06 13:36:31.048723] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:41:38.055 [2024-12-06 13:36:31.048732] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:41:38.055 [2024-12-06 13:36:31.057530] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:41:38.055 [2024-12-06 13:36:31.057558] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:41:38.055 [2024-12-06 13:36:31.064437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:41:38.055 [2024-12-06 13:36:31.064615] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:41:38.055 [2024-12-06 13:36:31.081470] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:41:38.055 1
00:41:38.055 13:36:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:38.055 13:36:31 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1
00:41:39.429 13:36:32 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76803
00:41:39.429 13:36:32 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60
00:41:39.429 13:36:32 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5
00:41:39.429 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:41:39.429 fio-3.35
00:41:39.429 Starting 1 process
00:41:44.689 13:36:37 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76768
00:41:44.689 13:36:37 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5
00:41:49.964 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76768 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk
00:41:49.964 13:36:42 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76909
00:41:49.964 13:36:42 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:41:49.964 13:36:42 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76909
00:41:49.964 13:36:42 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76909 ']'
00:41:49.964 13:36:42 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:49.964 13:36:42 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:41:49.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:41:49.964 13:36:42 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:49.964 13:36:42 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:41:49.964 13:36:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:41:49.964 13:36:42 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:41:49.964 [2024-12-06 13:36:42.262818] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization...
00:41:49.964 [2024-12-06 13:36:42.262996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76909 ]
00:41:49.964 [2024-12-06 13:36:42.455818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:41:49.964 [2024-12-06 13:36:42.642823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:49.964 [2024-12-06 13:36:42.642859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:41:50.903 13:36:43 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:41:50.903 13:36:43 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:41:50.903 13:36:43 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target
00:41:50.903 13:36:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.903 13:36:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:41:50.903 [2024-12-06 13:36:43.736435] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:41:50.903 [2024-12-06 13:36:43.740007] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:41:50.903 13:36:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.903 13:36:43 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:41:50.903 13:36:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.903 13:36:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:41:50.903 malloc0
00:41:50.903 13:36:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.903 13:36:43 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1
00:41:50.903 13:36:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.903 13:36:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:41:50.903 [2024-12-06 13:36:43.927654] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0
00:41:50.903 [2024-12-06 13:36:43.927712] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:41:50.903 [2024-12-06 13:36:43.927726] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:41:50.903 [2024-12-06 13:36:43.935460] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:41:50.903 [2024-12-06 13:36:43.935491] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2
00:41:50.903 [2024-12-06 13:36:43.935502] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:41:50.903 [2024-12-06 13:36:43.935627] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:41:50.903 1
00:41:50.903 13:36:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.903 13:36:43 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76803
00:41:50.903 [2024-12-06 13:36:43.943447] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:41:50.903 [2024-12-06 13:36:43.951006] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:41:50.903 [2024-12-06 13:36:43.958660] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:41:50.903 [2024-12-06 13:36:43.958689] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:42:47.169
00:42:47.169 fio_test: (groupid=0, jobs=1): err= 0: pid=76806: Fri Dec 6 13:37:32 2024
00:42:47.169 read: IOPS=21.0k, BW=81.9MiB/s (85.9MB/s)(4915MiB/60002msec)
00:42:47.169 slat (usec): min=2, max=1545, avg= 6.39, stdev= 2.71
00:42:47.170 clat (usec): min=1162, max=6867.0k, avg=2987.77, stdev=48565.99
00:42:47.170 lat (usec): min=1168, max=6867.0k, avg=2994.16, stdev=48566.00
00:42:47.170 clat percentiles (usec):
00:42:47.170 | 1.00th=[ 2147], 5.00th=[ 2311], 10.00th=[ 2343], 20.00th=[ 2376],
00:42:47.170 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2540],
00:42:47.170 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 2933], 95.00th=[ 3785],
00:42:47.170 | 99.00th=[ 5211], 99.50th=[ 5997], 99.90th=[ 7635], 99.95th=[ 8455],
00:42:47.170 | 99.99th=[13566]
00:42:47.170 bw ( KiB/s): min=33280, max=101544, per=100.00%, avg=94137.99, stdev=10221.63, samples=106
00:42:47.170 iops : min= 8320, max=25386, avg=23534.48, stdev=2555.40, samples=106
00:42:47.170 write: IOPS=21.0k, BW=81.8MiB/s (85.8MB/s)(4911MiB/60002msec); 0 zone resets
00:42:47.170 slat (usec): min=2, max=493, avg= 6.45, stdev= 2.17
00:42:47.170 clat (usec): min=1040, max=6867.4k, avg=3104.69, stdev=49356.13
00:42:47.170 lat (usec): min=1046, max=6867.4k, avg=3111.14, stdev=49356.14
00:42:47.170 clat percentiles (usec):
00:42:47.170 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2507],
00:42:47.170 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2638],
00:42:47.170 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2966], 95.00th=[ 3752],
00:42:47.170 | 99.00th=[ 5211], 99.50th=[ 6128], 99.90th=[ 7701], 99.95th=[ 8586],
00:42:47.170 | 99.99th=[13304]
00:42:47.170 bw ( KiB/s): min=32560, max=101384, per=100.00%, avg=94052.40, stdev=10223.23, samples=106
00:42:47.170 iops : min= 8140, max=25346, avg=23513.08, stdev=2555.80, samples=106
00:42:47.170 lat (msec) : 2=0.42%, 4=95.40%, 10=4.16%, 20=0.01%, >=2000=0.01%
00:42:47.170 cpu : usr=9.17%, sys=26.88%, ctx=85271, majf=0, minf=13
00:42:47.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:42:47.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:47.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:42:47.170 issued rwts: total=1258302,1257159,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:47.170 latency : target=0, window=0, percentile=100.00%, depth=128
00:42:47.170
00:42:47.170 Run status group 0 (all jobs):
00:42:47.170 READ: bw=81.9MiB/s (85.9MB/s), 81.9MiB/s-81.9MiB/s (85.9MB/s-85.9MB/s), io=4915MiB (5154MB), run=60002-60002msec
00:42:47.170 WRITE: bw=81.8MiB/s (85.8MB/s), 81.8MiB/s-81.8MiB/s (85.8MB/s-85.8MB/s), io=4911MiB (5149MB), run=60002-60002msec
00:42:47.170
00:42:47.170 Disk stats (read/write):
00:42:47.170 ublkb1: ios=1255370/1254362, merge=0/0, ticks=3666890/3671166, in_queue=7338057, util=99.93%
00:42:47.170 13:37:32 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:42:47.170 [2024-12-06 13:37:32.361048] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:42:47.170 [2024-12-06 13:37:32.395462] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:42:47.170 [2024-12-06 13:37:32.395678] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:42:47.170 [2024-12-06 13:37:32.403439] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:42:47.170 [2024-12-06 13:37:32.403581] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:42:47.170 [2024-12-06 13:37:32.403598] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:47.170 13:37:32 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:42:47.170 [2024-12-06 13:37:32.419555] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:42:47.170 [2024-12-06 13:37:32.427418] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:42:47.170 [2024-12-06 13:37:32.427460] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:47.170 13:37:32 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:42:47.170 13:37:32 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:42:47.170 13:37:32 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76909
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76909 ']'
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76909
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76909
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:42:47.170 killing process with pid 76909 13:37:32 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76909'
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76909
00:42:47.170 13:37:32 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76909
00:42:47.170 [2024-12-06 13:37:34.249179] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:42:47.170 [2024-12-06 13:37:34.249251] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:42:47.170
00:42:47.170 real 1m6.511s
00:42:47.170 user 1m50.789s
00:42:47.170 sys 0m32.785s
00:42:47.170 13:37:35 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:42:47.170 13:37:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:42:47.170 ************************************
00:42:47.170 END TEST ublk_recovery
00:42:47.170 ************************************
00:42:47.170 13:37:35 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:42:47.170 13:37:35 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:42:47.170 13:37:35 -- spdk/autotest.sh@260 -- # timing_exit lib
00:42:47.170 13:37:35 -- common/autotest_common.sh@732 -- # xtrace_disable
00:42:47.170 13:37:35 -- common/autotest_common.sh@10 -- # set +x
00:42:47.170 13:37:35 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:42:47.170 13:37:35 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:42:47.170 13:37:35 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:42:47.170 13:37:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:42:47.170 13:37:35 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:42:47.170 13:37:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:42:47.170 13:37:35 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:42:47.170 13:37:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:42:47.170 13:37:35 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:42:47.170 13:37:35 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']'
00:42:47.170 13:37:35 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:42:47.170 13:37:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:42:47.170 13:37:35 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:42:47.170 13:37:35 -- common/autotest_common.sh@10 -- # set +x
00:42:47.170 ************************************
00:42:47.170 START TEST ftl
00:42:47.170 ************************************
00:42:47.170 13:37:35 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:42:47.170 * Looking for test storage...
00:42:47.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:42:47.170 13:37:36 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:42:47.170 13:37:36 ftl -- common/autotest_common.sh@1711 -- # lcov --version
00:42:47.170 13:37:36 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:42:47.170 13:37:36 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:42:47.170 13:37:36 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:42:47.170 13:37:36 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l
00:42:47.170 13:37:36 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l
00:42:47.170 13:37:36 ftl -- scripts/common.sh@336 -- # IFS=.-:
00:42:47.170 13:37:36 ftl -- scripts/common.sh@336 -- # read -ra ver1
00:42:47.170 13:37:36 ftl -- scripts/common.sh@337 -- # IFS=.-:
00:42:47.170 13:37:36 ftl -- scripts/common.sh@337 -- # read -ra ver2
00:42:47.170 13:37:36 ftl -- scripts/common.sh@338 -- # local 'op=<'
00:42:47.170 13:37:36 ftl -- scripts/common.sh@340 -- # ver1_l=2
00:42:47.170 13:37:36 ftl -- scripts/common.sh@341 -- # ver2_l=1
00:42:47.170 13:37:36 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:42:47.170 13:37:36 ftl -- scripts/common.sh@344 -- # case "$op" in
00:42:47.170 13:37:36 ftl -- scripts/common.sh@345 -- # : 1
00:42:47.170 13:37:36 ftl -- scripts/common.sh@364 -- # (( v = 0 ))
00:42:47.170 13:37:36 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:42:47.170 13:37:36 ftl -- scripts/common.sh@365 -- # decimal 1
00:42:47.170 13:37:36 ftl -- scripts/common.sh@353 -- # local d=1
00:42:47.170 13:37:36 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:42:47.170 13:37:36 ftl -- scripts/common.sh@355 -- # echo 1
00:42:47.170 13:37:36 ftl -- scripts/common.sh@365 -- # ver1[v]=1
00:42:47.170 13:37:36 ftl -- scripts/common.sh@366 -- # decimal 2
00:42:47.170 13:37:36 ftl -- scripts/common.sh@353 -- # local d=2
00:42:47.170 13:37:36 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:42:47.170 13:37:36 ftl -- scripts/common.sh@355 -- # echo 2
00:42:47.170 13:37:36 ftl -- scripts/common.sh@366 -- # ver2[v]=2
00:42:47.170 13:37:36 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:42:47.170 13:37:36 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:42:47.170 13:37:36 ftl -- scripts/common.sh@368 -- # return 0
00:42:47.170 13:37:36 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:42:47.170 13:37:36 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:42:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:47.170 --rc genhtml_branch_coverage=1
00:42:47.170 --rc genhtml_function_coverage=1
00:42:47.170 --rc genhtml_legend=1
00:42:47.170 --rc geninfo_all_blocks=1
00:42:47.170 --rc geninfo_unexecuted_blocks=1
00:42:47.170
00:42:47.170 '
00:42:47.170 13:37:36 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:42:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:47.170 --rc genhtml_branch_coverage=1
00:42:47.170 --rc genhtml_function_coverage=1
00:42:47.170 --rc genhtml_legend=1
00:42:47.170 --rc geninfo_all_blocks=1
00:42:47.170 --rc geninfo_unexecuted_blocks=1
00:42:47.170
00:42:47.170 '
00:42:47.170 13:37:36 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:42:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:47.171 --rc genhtml_branch_coverage=1
00:42:47.171 --rc genhtml_function_coverage=1
00:42:47.171 --rc genhtml_legend=1
00:42:47.171 --rc geninfo_all_blocks=1
00:42:47.171 --rc geninfo_unexecuted_blocks=1
00:42:47.171
00:42:47.171 '
00:42:47.171 13:37:36 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:42:47.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:47.171 --rc genhtml_branch_coverage=1
00:42:47.171 --rc genhtml_function_coverage=1
00:42:47.171 --rc genhtml_legend=1
00:42:47.171 --rc geninfo_all_blocks=1
00:42:47.171 --rc geninfo_unexecuted_blocks=1
00:42:47.171
00:42:47.171 '
00:42:47.171 13:37:36 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:42:47.171 13:37:36 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:42:47.171 13:37:36 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:42:47.171 13:37:36 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:42:47.171 13:37:36 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:42:47.171 13:37:36 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:42:47.171 13:37:36 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:42:47.171 13:37:36 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:42:47.171 13:37:36 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:42:47.171 13:37:36 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:47.171 13:37:36 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:47.171 13:37:36 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:42:47.171 13:37:36 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:42:47.171 13:37:36 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:42:47.171 13:37:36 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:42:47.171 13:37:36 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:42:47.171 13:37:36 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:42:47.171 13:37:36 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:47.171 13:37:36 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:47.171 13:37:36 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:42:47.171 13:37:36 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:42:47.171 13:37:36 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:42:47.171 13:37:36 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:42:47.171 13:37:36 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:42:47.171 13:37:36 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:42:47.171 13:37:36 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:42:47.171 13:37:36 ftl -- ftl/common.sh@23 -- # spdk_ini_pid=
00:42:47.171 13:37:36 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:42:47.171 13:37:36 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:42:47.171 13:37:36 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:42:47.171 13:37:36 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT
00:42:47.171 13:37:36 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED=
00:42:47.171 13:37:36 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED=
00:42:47.171 13:37:36 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE=
00:42:47.171 13:37:36 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:42:47.171 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:42:47.171 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:42:47.171 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:42:47.171 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:42:47.171 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:42:47.171 13:37:36 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77708
00:42:47.171 13:37:36 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:42:47.171 13:37:36 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77708
00:42:47.171 13:37:36 ftl -- common/autotest_common.sh@835 -- # '[' -z 77708 ']'
00:42:47.171 13:37:36 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:42:47.171 13:37:36 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:42:47.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:42:47.171 13:37:36 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:42:47.171 13:37:36 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:42:47.171 13:37:36 ftl -- common/autotest_common.sh@10 -- # set +x
00:42:47.171 [2024-12-06 13:37:37.034114] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization...
00:42:47.171 [2024-12-06 13:37:37.034305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77708 ]
00:42:47.171 [2024-12-06 13:37:37.243275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:47.171 [2024-12-06 13:37:37.440965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:42:47.171 13:37:37 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:42:47.171 13:37:37 ftl -- common/autotest_common.sh@868 -- # return 0
00:42:47.171 13:37:37 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d
00:42:47.171 13:37:38 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:42:47.171 13:37:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:42:47.171 13:37:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62
00:42:47.171 13:37:39 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720
00:42:47.171 13:37:39 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:42:47.171 13:37:39 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:42:47.171 13:37:40 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0
00:42:47.171 13:37:40 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks
00:42:47.171 13:37:40 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0
00:42:47.171 13:37:40 ftl -- ftl/ftl.sh@50 -- # break
00:42:47.171 13:37:40 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']'
00:42:47.171 13:37:40 ftl -- ftl/ftl.sh@59 -- # base_size=1310720
00:42:47.171 13:37:40 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:42:47.171 13:37:40 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:42:47.429 13:37:40 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0
00:42:47.429 13:37:40 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks
00:42:47.429 13:37:40 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0
00:42:47.429 13:37:40 ftl -- ftl/ftl.sh@63 -- # break
00:42:47.429 13:37:40 ftl -- ftl/ftl.sh@66 -- # killprocess 77708
00:42:47.429 13:37:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 77708 ']'
00:42:47.429 13:37:40 ftl -- common/autotest_common.sh@958 -- # kill -0 77708
00:42:47.429 13:37:40 ftl -- common/autotest_common.sh@959 -- # uname
00:42:47.429 13:37:40 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:42:47.429 13:37:40 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77708
00:42:47.429 killing process with pid 77708 13:37:40 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:42:47.429 13:37:40 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:42:47.429 13:37:40 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77708'
00:42:47.429 13:37:40 ftl -- common/autotest_common.sh@973 -- # kill 77708
00:42:47.429 13:37:40 ftl -- common/autotest_common.sh@978 -- # wait 77708
00:42:50.032 13:37:43 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']'
00:42:50.032 13:37:43 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:42:50.033 13:37:43 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:42:50.033 13:37:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:42:50.033 13:37:43 ftl -- common/autotest_common.sh@10 -- # set +x
00:42:50.033 ************************************
00:42:50.033 START TEST ftl_fio_basic
00:42:50.033 ************************************
00:42:50.033 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:42:50.293 * Looking for test storage...
00:42:50.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-:
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-:
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 ))
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:42:50.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:50.293 --rc genhtml_branch_coverage=1
00:42:50.293 --rc genhtml_function_coverage=1
00:42:50.293 --rc genhtml_legend=1
00:42:50.293 --rc geninfo_all_blocks=1
00:42:50.293 --rc geninfo_unexecuted_blocks=1
00:42:50.293
00:42:50.293 '
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:42:50.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:50.293 --rc genhtml_branch_coverage=1
00:42:50.293 --rc genhtml_function_coverage=1
00:42:50.293 --rc genhtml_legend=1
00:42:50.293 --rc geninfo_all_blocks=1
00:42:50.293 --rc geninfo_unexecuted_blocks=1
00:42:50.293
00:42:50.293 '
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:42:50.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:50.293 --rc genhtml_branch_coverage=1
00:42:50.293 --rc genhtml_function_coverage=1
00:42:50.293 --rc genhtml_legend=1
00:42:50.293 --rc geninfo_all_blocks=1
00:42:50.293 --rc geninfo_unexecuted_blocks=1
00:42:50.293
00:42:50.293 '
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:42:50.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:50.293 --rc genhtml_branch_coverage=1
00:42:50.293 --rc genhtml_function_coverage=1
00:42:50.293 --rc genhtml_legend=1
00:42:50.293 --rc geninfo_all_blocks=1
00:42:50.293 --rc geninfo_unexecuted_blocks=1
00:42:50.293
00:42:50.293 '
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid=
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid=
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]]
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']'
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:42:50.293 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:42:50.294 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77867
00:42:50.294 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77867
00:42:50.294 13:37:43 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7
00:42:50.294 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77867 ']'
00:42:50.294 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:42:50.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:42:50.294 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100
00:42:50.294 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:42:50.294 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable
00:42:50.294 13:37:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:42:50.553 [2024-12-06 13:37:43.523868] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization...
00:42:50.553 [2024-12-06 13:37:43.524068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77867 ]
00:42:50.812 [2024-12-06 13:37:43.724385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:42:50.812 [2024-12-06 13:37:43.870217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:42:50.812 [2024-12-06 13:37:43.870348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:42:50.812 [2024-12-06 13:37:43.870383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:42:52.186 13:37:44 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:42:52.186 13:37:44 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0
00:42:52.186 13:37:44 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:42:52.186 13:37:44 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0
00:42:52.186 13:37:44 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:42:52.186 13:37:44 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424
00:42:52.186 13:37:44 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev
00:42:52.186 13:37:44 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:42:52.186 13:37:45 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:42:52.186 13:37:45 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size
00:42:52.443 13:37:45 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:42:52.443 13:37:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:42:52.443 13:37:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:42:52.443 13:37:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:42:52.443 13:37:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:42:52.443 13:37:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:42:52.443 13:37:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:42:52.443 {
00:42:52.443 "name": "nvme0n1",
00:42:52.443 "aliases": [
00:42:52.443 "30941722-48a2-4f16-9bb7-174e755a885c"
00:42:52.443 ],
00:42:52.443 "product_name": "NVMe disk",
00:42:52.443 "block_size": 4096,
00:42:52.443 "num_blocks": 1310720,
00:42:52.443 "uuid": "30941722-48a2-4f16-9bb7-174e755a885c",
00:42:52.443 "numa_id": -1,
00:42:52.443 "assigned_rate_limits": {
00:42:52.443 "rw_ios_per_sec": 0,
00:42:52.443 "rw_mbytes_per_sec": 0,
00:42:52.443 "r_mbytes_per_sec": 0,
00:42:52.443 "w_mbytes_per_sec": 0
00:42:52.443 },
00:42:52.443 "claimed": false,
00:42:52.443 "zoned": false,
00:42:52.443 "supported_io_types": {
00:42:52.443 "read": true,
00:42:52.443 "write": true,
00:42:52.443 "unmap": true,
00:42:52.443 "flush": true,
00:42:52.443 "reset": true,
00:42:52.443 "nvme_admin": true,
00:42:52.443 "nvme_io": true,
00:42:52.443 "nvme_io_md": false,
00:42:52.443 "write_zeroes": true,
00:42:52.443 "zcopy": false,
00:42:52.443 "get_zone_info": false,
00:42:52.443 "zone_management": false,
00:42:52.443 "zone_append": false,
00:42:52.443 "compare": true,
00:42:52.443 "compare_and_write": false,
00:42:52.443 "abort": true,
00:42:52.443 "seek_hole": false,
00:42:52.443 "seek_data": false,
00:42:52.443 "copy": true,
00:42:52.443 "nvme_iov_md": false
00:42:52.443 },
00:42:52.443 "driver_specific": {
00:42:52.443 "nvme": [
00:42:52.443 {
00:42:52.443 "pci_address": "0000:00:11.0",
00:42:52.443 "trid": {
00:42:52.443 "trtype": "PCIe",
00:42:52.443 "traddr": "0000:00:11.0"
00:42:52.443 },
00:42:52.443 "ctrlr_data": {
00:42:52.443 "cntlid": 0,
00:42:52.443 "vendor_id": "0x1b36",
00:42:52.443 "model_number": "QEMU NVMe Ctrl",
00:42:52.443 "serial_number": "12341",
00:42:52.443 "firmware_revision": "8.0.0",
00:42:52.443 "subnqn": "nqn.2019-08.org.qemu:12341",
00:42:52.443 "oacs": {
00:42:52.443 "security": 0,
00:42:52.443 "format": 1,
00:42:52.443 "firmware": 0,
00:42:52.443 "ns_manage": 1
00:42:52.443 },
00:42:52.443 "multi_ctrlr": false,
00:42:52.443 "ana_reporting": false
00:42:52.443 },
00:42:52.443 "vs": {
00:42:52.443 "nvme_version": "1.4"
00:42:52.443 },
00:42:52.443 "ns_data": {
00:42:52.443 "id": 1,
00:42:52.443 "can_share": false
00:42:52.443 }
00:42:52.443 }
00:42:52.443 ],
00:42:52.443 "mp_policy": "active_passive"
00:42:52.443 }
00:42:52.443 }
00:42:52.443 ]'
00:42:52.443 13:37:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:42:52.701 13:37:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:42:52.701 13:37:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:42:52.701 13:37:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720
00:42:52.701 13:37:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:42:52.701 13:37:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120
00:42:52.701 13:37:45 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120
00:42:52.701 13:37:45 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:42:52.701 13:37:45 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols
00:42:52.701 13:37:45 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:42:52.701 13:37:45 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:42:52.958 13:37:45 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores=
00:42:52.958 13:37:45 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:42:53.216 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=7d8d6e6d-2e85-4232-85f8-4f8665002e36
00:42:53.216 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7d8d6e6d-2e85-4232-85f8-4f8665002e36
00:42:53.474 13:37:46 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b331eeb5-e10c-4e81-aa1d-c29a185c7807
00:42:53.474 13:37:46 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b331eeb5-e10c-4e81-aa1d-c29a185c7807
00:42:53.474 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0
00:42:53.474 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:42:53.474 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b331eeb5-e10c-4e81-aa1d-c29a185c7807
00:42:53.474 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size=
00:42:53.474 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b331eeb5-e10c-4e81-aa1d-c29a185c7807
00:42:53.474 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b331eeb5-e10c-4e81-aa1d-c29a185c7807
00:42:53.474 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:53.474 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:42:53.474 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:42:53.474 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b331eeb5-e10c-4e81-aa1d-c29a185c7807 00:42:53.732 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:53.732 { 00:42:53.732 "name": "b331eeb5-e10c-4e81-aa1d-c29a185c7807", 00:42:53.732 "aliases": [ 00:42:53.732 "lvs/nvme0n1p0" 00:42:53.732 ], 00:42:53.732 "product_name": "Logical Volume", 00:42:53.732 "block_size": 4096, 00:42:53.732 "num_blocks": 26476544, 00:42:53.732 "uuid": "b331eeb5-e10c-4e81-aa1d-c29a185c7807", 00:42:53.732 "assigned_rate_limits": { 00:42:53.732 "rw_ios_per_sec": 0, 00:42:53.732 "rw_mbytes_per_sec": 0, 00:42:53.732 "r_mbytes_per_sec": 0, 00:42:53.732 "w_mbytes_per_sec": 0 00:42:53.732 }, 00:42:53.732 "claimed": false, 00:42:53.732 "zoned": false, 00:42:53.732 "supported_io_types": { 00:42:53.732 "read": true, 00:42:53.732 "write": true, 00:42:53.732 "unmap": true, 00:42:53.732 "flush": false, 00:42:53.732 "reset": true, 00:42:53.732 "nvme_admin": false, 00:42:53.732 "nvme_io": false, 00:42:53.732 "nvme_io_md": false, 00:42:53.732 "write_zeroes": true, 00:42:53.732 "zcopy": false, 00:42:53.732 "get_zone_info": false, 00:42:53.732 "zone_management": false, 00:42:53.732 "zone_append": false, 00:42:53.732 "compare": false, 00:42:53.732 "compare_and_write": false, 00:42:53.732 "abort": false, 00:42:53.732 "seek_hole": true, 00:42:53.732 "seek_data": true, 00:42:53.732 "copy": false, 00:42:53.732 "nvme_iov_md": false 00:42:53.732 }, 00:42:53.733 "driver_specific": { 00:42:53.733 "lvol": { 00:42:53.733 "lvol_store_uuid": "7d8d6e6d-2e85-4232-85f8-4f8665002e36", 00:42:53.733 "base_bdev": "nvme0n1", 00:42:53.733 "thin_provision": true, 00:42:53.733 "num_allocated_clusters": 0, 00:42:53.733 "snapshot": false, 00:42:53.733 "clone": false, 00:42:53.733 "esnap_clone": false 00:42:53.733 } 00:42:53.733 } 00:42:53.733 } 00:42:53.733 ]' 00:42:53.733 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:53.733 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:42:53.733 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:53.733 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:42:53.733 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:42:53.733 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:42:53.733 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:42:53.733 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:42:53.733 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:42:53.991 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:42:53.991 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:42:53.991 13:37:46 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b331eeb5-e10c-4e81-aa1d-c29a185c7807 00:42:53.991 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b331eeb5-e10c-4e81-aa1d-c29a185c7807 00:42:53.991 13:37:46 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:53.991 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:42:53.991 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:42:53.991 13:37:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b331eeb5-e10c-4e81-aa1d-c29a185c7807 00:42:54.250 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:54.250 { 00:42:54.250 "name": "b331eeb5-e10c-4e81-aa1d-c29a185c7807", 00:42:54.250 "aliases": [ 00:42:54.250 "lvs/nvme0n1p0" 00:42:54.250 ], 00:42:54.250 "product_name": "Logical Volume", 00:42:54.250 "block_size": 4096, 00:42:54.250 "num_blocks": 26476544, 00:42:54.250 "uuid": "b331eeb5-e10c-4e81-aa1d-c29a185c7807", 00:42:54.250 "assigned_rate_limits": { 00:42:54.250 "rw_ios_per_sec": 0, 00:42:54.250 "rw_mbytes_per_sec": 0, 00:42:54.250 "r_mbytes_per_sec": 0, 00:42:54.250 "w_mbytes_per_sec": 0 00:42:54.250 }, 00:42:54.250 "claimed": false, 00:42:54.250 "zoned": false, 00:42:54.250 "supported_io_types": { 00:42:54.250 "read": true, 00:42:54.250 "write": true, 00:42:54.250 "unmap": true, 00:42:54.250 "flush": false, 00:42:54.250 "reset": true, 00:42:54.250 "nvme_admin": false, 00:42:54.250 "nvme_io": false, 00:42:54.250 "nvme_io_md": false, 00:42:54.250 "write_zeroes": true, 00:42:54.250 "zcopy": false, 00:42:54.250 "get_zone_info": false, 00:42:54.250 "zone_management": false, 00:42:54.250 "zone_append": false, 00:42:54.250 "compare": false, 00:42:54.250 "compare_and_write": false, 00:42:54.250 "abort": false, 00:42:54.250 "seek_hole": true, 00:42:54.250 "seek_data": true, 00:42:54.250 "copy": false, 00:42:54.250 "nvme_iov_md": false 00:42:54.250 }, 00:42:54.250 "driver_specific": { 00:42:54.250 "lvol": { 00:42:54.250 "lvol_store_uuid": "7d8d6e6d-2e85-4232-85f8-4f8665002e36", 00:42:54.250 "base_bdev": "nvme0n1", 00:42:54.250 "thin_provision": true, 00:42:54.250 "num_allocated_clusters": 0, 00:42:54.250 "snapshot": false, 00:42:54.250 "clone": false, 00:42:54.250 "esnap_clone": false 00:42:54.250 } 00:42:54.250 } 00:42:54.250 } 00:42:54.250 ]' 00:42:54.250 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:54.250 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:42:54.250 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:54.250 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:42:54.250 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:42:54.250 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:42:54.250 13:37:47 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:42:54.250 13:37:47 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:42:54.508 13:37:47 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:42:54.508 13:37:47 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:42:54.508 13:37:47 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:42:54.508 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:42:54.508 13:37:47 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b331eeb5-e10c-4e81-aa1d-c29a185c7807 00:42:54.509 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=b331eeb5-e10c-4e81-aa1d-c29a185c7807 00:42:54.509 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:54.509 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:42:54.509 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:42:54.509 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b331eeb5-e10c-4e81-aa1d-c29a185c7807 00:42:54.768 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:54.768 { 00:42:54.768 "name": "b331eeb5-e10c-4e81-aa1d-c29a185c7807", 00:42:54.768 "aliases": [ 00:42:54.768 "lvs/nvme0n1p0" 00:42:54.768 ], 00:42:54.768 "product_name": "Logical Volume", 00:42:54.768 "block_size": 4096, 00:42:54.768 "num_blocks": 26476544, 00:42:54.768 "uuid": "b331eeb5-e10c-4e81-aa1d-c29a185c7807", 00:42:54.768 "assigned_rate_limits": { 00:42:54.768 "rw_ios_per_sec": 0, 00:42:54.768 "rw_mbytes_per_sec": 0, 00:42:54.768 "r_mbytes_per_sec": 0, 00:42:54.768 "w_mbytes_per_sec": 0 00:42:54.768 }, 00:42:54.768 "claimed": false, 00:42:54.768 "zoned": false, 00:42:54.768 "supported_io_types": { 00:42:54.768 "read": true, 00:42:54.768 "write": true, 00:42:54.768 "unmap": true, 00:42:54.768 "flush": false, 00:42:54.768 "reset": true, 00:42:54.768 "nvme_admin": false, 00:42:54.768 "nvme_io": false, 00:42:54.768 "nvme_io_md": false, 00:42:54.768 "write_zeroes": true, 00:42:54.768 "zcopy": false, 00:42:54.768 "get_zone_info": false, 00:42:54.768 "zone_management": false, 00:42:54.768 "zone_append": false, 00:42:54.768 "compare": false, 00:42:54.768 "compare_and_write": false, 00:42:54.768 "abort": false, 00:42:54.768 "seek_hole": true, 00:42:54.768 "seek_data": true, 00:42:54.768 "copy": false, 00:42:54.768 "nvme_iov_md": false 00:42:54.768 }, 00:42:54.768 "driver_specific": { 00:42:54.768 "lvol": { 00:42:54.768 "lvol_store_uuid": "7d8d6e6d-2e85-4232-85f8-4f8665002e36", 00:42:54.768 "base_bdev": "nvme0n1", 00:42:54.768 "thin_provision": true, 00:42:54.768 "num_allocated_clusters": 0, 00:42:54.768 "snapshot": false, 00:42:54.768 "clone": false, 00:42:54.768 "esnap_clone": false 00:42:54.768 } 00:42:54.768 } 00:42:54.768 } 00:42:54.768 ]' 00:42:54.768 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:54.768 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:42:54.768 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:54.768 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:42:54.768 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:42:54.768 13:37:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:42:54.768 13:37:47 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:42:54.768 13:37:47 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:42:54.768 13:37:47 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b331eeb5-e10c-4e81-aa1d-c29a185c7807 -c nvc0n1p0 --l2p_dram_limit 60 00:42:55.027 [2024-12-06 13:37:48.121517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:55.027 [2024-12-06 13:37:48.121800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:55.027 [2024-12-06 13:37:48.121835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:42:55.027 
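The "fio.sh: line 52: [: -eq: unary operator expected" message a few lines up is the classic unquoted-empty-variable failure in a numeric test: the xtrace shows the test collapsing to '[' -eq 1 ']', i.e. the left operand expanded to nothing. A hedged sketch of the failure mode and the usual fix ($flag is a stand-in; the real variable tested at fio.sh line 52 is not visible in this trace):

    flag=""
    [ $flag -eq 1 ]          # expands to [ -eq 1 ] -> "unary operator expected"
    [ "${flag:-0}" -eq 1 ]   # quote and default to 0; safe when unset or empty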
[2024-12-06 13:37:48.121848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:55.027 [2024-12-06 13:37:48.121970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:55.027 [2024-12-06 13:37:48.122006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:55.027 [2024-12-06 13:37:48.122023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:42:55.027 [2024-12-06 13:37:48.122034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:55.027 [2024-12-06 13:37:48.122099] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:55.027 [2024-12-06 13:37:48.123345] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:55.027 [2024-12-06 13:37:48.123382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:55.027 [2024-12-06 13:37:48.123394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:55.027 [2024-12-06 13:37:48.123426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:42:55.027 [2024-12-06 13:37:48.123436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:55.027 [2024-12-06 13:37:48.123545] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f4956239-c7fa-4ce2-938b-33b46e0a5db3 00:42:55.286 [2024-12-06 13:37:48.126240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:55.286 [2024-12-06 13:37:48.126281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:42:55.286 [2024-12-06 13:37:48.126295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:42:55.286 [2024-12-06 13:37:48.126310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:55.286 [2024-12-06 13:37:48.141273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:55.286 [2024-12-06 13:37:48.141469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:55.286 [2024-12-06 13:37:48.141494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.833 ms 00:42:55.286 [2024-12-06 13:37:48.141509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:55.286 [2024-12-06 13:37:48.141675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:55.286 [2024-12-06 13:37:48.141693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:55.286 [2024-12-06 13:37:48.141706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:42:55.286 [2024-12-06 13:37:48.141726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:55.286 [2024-12-06 13:37:48.141833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:55.286 [2024-12-06 13:37:48.141850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:55.286 [2024-12-06 13:37:48.141861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:42:55.286 [2024-12-06 13:37:48.141875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:55.286 [2024-12-06 13:37:48.141933] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:55.286 [2024-12-06 13:37:48.148122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:55.286 [2024-12-06 
13:37:48.148155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:55.286 [2024-12-06 13:37:48.148173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.204 ms 00:42:55.286 [2024-12-06 13:37:48.148206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:55.286 [2024-12-06 13:37:48.148269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:55.286 [2024-12-06 13:37:48.148282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:55.286 [2024-12-06 13:37:48.148299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:42:55.286 [2024-12-06 13:37:48.148310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:55.286 [2024-12-06 13:37:48.148372] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:42:55.286 [2024-12-06 13:37:48.148571] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:55.286 [2024-12-06 13:37:48.148611] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:55.287 [2024-12-06 13:37:48.148626] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:55.287 [2024-12-06 13:37:48.148651] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:55.287 [2024-12-06 13:37:48.148665] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:55.287 [2024-12-06 13:37:48.148682] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:55.287 [2024-12-06 13:37:48.148693] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:55.287 [2024-12-06 13:37:48.148707] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:55.287 [2024-12-06 13:37:48.148724] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:55.287 [2024-12-06 13:37:48.148739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:55.287 [2024-12-06 13:37:48.148754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:55.287 [2024-12-06 13:37:48.148768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:42:55.287 [2024-12-06 13:37:48.148779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:55.287 [2024-12-06 13:37:48.148875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:55.287 [2024-12-06 13:37:48.148887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:55.287 [2024-12-06 13:37:48.148901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:42:55.287 [2024-12-06 13:37:48.148912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:55.287 [2024-12-06 13:37:48.149050] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:55.287 [2024-12-06 13:37:48.149063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:55.287 [2024-12-06 13:37:48.149081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:55.287 [2024-12-06 13:37:48.149092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:55.287 [2024-12-06 13:37:48.149106] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:42:55.287 [2024-12-06 13:37:48.149116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:55.287 [2024-12-06 13:37:48.149129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:55.287 [2024-12-06 13:37:48.149139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:55.287 [2024-12-06 13:37:48.149154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:55.287 [2024-12-06 13:37:48.149163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:55.287 [2024-12-06 13:37:48.149176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:55.287 [2024-12-06 13:37:48.149186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:55.287 [2024-12-06 13:37:48.149203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:55.287 [2024-12-06 13:37:48.149213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:55.287 [2024-12-06 13:37:48.149226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:55.287 [2024-12-06 13:37:48.149235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:55.287 [2024-12-06 13:37:48.149253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:55.287 [2024-12-06 13:37:48.149263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:55.287 [2024-12-06 13:37:48.149279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:55.287 [2024-12-06 13:37:48.149289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:55.287 [2024-12-06 13:37:48.149304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:55.287 [2024-12-06 13:37:48.149314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:55.287 [2024-12-06 13:37:48.149329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:55.287 [2024-12-06 13:37:48.149340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:55.287 [2024-12-06 13:37:48.149355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:55.287 [2024-12-06 13:37:48.149365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:55.287 [2024-12-06 13:37:48.149380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:55.287 [2024-12-06 13:37:48.149390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:55.287 [2024-12-06 13:37:48.149609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:55.287 [2024-12-06 13:37:48.149650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:55.287 [2024-12-06 13:37:48.149690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:55.287 [2024-12-06 13:37:48.149722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:55.287 [2024-12-06 13:37:48.149860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:55.287 [2024-12-06 13:37:48.149920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:55.287 [2024-12-06 13:37:48.150026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:55.287 [2024-12-06 13:37:48.150061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:55.287 [2024-12-06 13:37:48.150095] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:55.287 [2024-12-06 13:37:48.150177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:55.287 [2024-12-06 13:37:48.150217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:42:55.287 [2024-12-06 13:37:48.150247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:55.287 [2024-12-06 13:37:48.150333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:55.287 [2024-12-06 13:37:48.150368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:55.287 [2024-12-06 13:37:48.150411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:55.287 [2024-12-06 13:37:48.150494] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:55.287 [2024-12-06 13:37:48.150537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:55.287 [2024-12-06 13:37:48.150570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:55.287 [2024-12-06 13:37:48.150688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:55.287 [2024-12-06 13:37:48.150725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:55.287 [2024-12-06 13:37:48.150765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:55.287 [2024-12-06 13:37:48.150895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:55.287 [2024-12-06 13:37:48.150939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:55.287 [2024-12-06 13:37:48.150973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:55.287 [2024-12-06 13:37:48.151074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:55.287 [2024-12-06 13:37:48.151115] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:55.287 [2024-12-06 13:37:48.151176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:55.287 [2024-12-06 13:37:48.151292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:55.287 [2024-12-06 13:37:48.151311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:55.287 [2024-12-06 13:37:48.151324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:55.287 [2024-12-06 13:37:48.151339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:55.287 [2024-12-06 13:37:48.151351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:55.287 [2024-12-06 13:37:48.151368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:55.287 [2024-12-06 13:37:48.151380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:55.287 [2024-12-06 13:37:48.151411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:42:55.287 [2024-12-06 13:37:48.151424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:55.287 [2024-12-06 13:37:48.151443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:55.287 [2024-12-06 13:37:48.151455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:55.287 [2024-12-06 13:37:48.151470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:55.287 [2024-12-06 13:37:48.151481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:55.287 [2024-12-06 13:37:48.151497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:42:55.287 [2024-12-06 13:37:48.151509] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:55.287 [2024-12-06 13:37:48.151525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:55.287 [2024-12-06 13:37:48.151541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:55.287 [2024-12-06 13:37:48.151567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:55.287 [2024-12-06 13:37:48.151579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:55.287 [2024-12-06 13:37:48.151594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:55.287 [2024-12-06 13:37:48.151608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:55.287 [2024-12-06 13:37:48.151626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:55.287 [2024-12-06 13:37:48.151638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:42:55.287 [2024-12-06 13:37:48.151653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:55.287 [2024-12-06 13:37:48.151798] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
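The two dumps above describe the same regions in different units: the layout view in MiB (e.g. "Region l2p ... offset: 0.12 MiB ... blocks: 80.00 MiB") and the superblock view in 4 KiB blocks (e.g. "Region type:0x2 ... blk_offs:0x20 blk_sz:0x5000"). A quick bash conversion confirming the two views agree for the l2p region (arithmetic only, not part of the test):

    blk_offs=$((0x20)); blk_sz=$((0x5000))                # l2p region, type 0x2
    echo "offset: $(( blk_offs * 4096 / 1024 )) KiB"      # 128 KiB = 0.12 MiB
    echo "size:   $(( blk_sz * 4096 / 1024 / 1024 )) MiB" # 80 MiB, as dumped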
00:42:55.287 [2024-12-06 13:37:48.151822] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:42:59.472 [2024-12-06 13:37:51.671534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.472 [2024-12-06 13:37:51.671877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:42:59.472 [2024-12-06 13:37:51.671928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3519.713 ms 00:42:59.472 [2024-12-06 13:37:51.671943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.472 [2024-12-06 13:37:51.724098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.472 [2024-12-06 13:37:51.724180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:59.472 [2024-12-06 13:37:51.724200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.791 ms 00:42:59.472 [2024-12-06 13:37:51.724216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.472 [2024-12-06 13:37:51.724473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.472 [2024-12-06 13:37:51.724493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:59.472 [2024-12-06 13:37:51.724506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:42:59.472 [2024-12-06 13:37:51.724525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.472 [2024-12-06 13:37:51.802416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.472 [2024-12-06 13:37:51.802515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:59.472 [2024-12-06 13:37:51.802541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.814 ms 00:42:59.472 [2024-12-06 13:37:51.802558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.472 [2024-12-06 13:37:51.802644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.472 [2024-12-06 13:37:51.802661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:59.472 [2024-12-06 13:37:51.802674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:59.472 [2024-12-06 13:37:51.802690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.472 [2024-12-06 13:37:51.803697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.472 [2024-12-06 13:37:51.803726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:59.472 [2024-12-06 13:37:51.803741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:42:59.472 [2024-12-06 13:37:51.803761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.472 [2024-12-06 13:37:51.803940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.472 [2024-12-06 13:37:51.803960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:59.472 [2024-12-06 13:37:51.803974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:42:59.472 [2024-12-06 13:37:51.803994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.472 [2024-12-06 13:37:51.832898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.472 [2024-12-06 13:37:51.832973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:59.472 [2024-12-06 
13:37:51.832991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.858 ms 00:42:59.472 [2024-12-06 13:37:51.833007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.472 [2024-12-06 13:37:51.849633] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:42:59.472 [2024-12-06 13:37:51.878738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.472 [2024-12-06 13:37:51.878816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:59.472 [2024-12-06 13:37:51.878844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.526 ms 00:42:59.472 [2024-12-06 13:37:51.878855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.472 [2024-12-06 13:37:51.955915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.472 [2024-12-06 13:37:51.955990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:42:59.472 [2024-12-06 13:37:51.956018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.968 ms 00:42:59.472 [2024-12-06 13:37:51.956030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.472 [2024-12-06 13:37:51.956285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.472 [2024-12-06 13:37:51.956299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:59.472 [2024-12-06 13:37:51.956320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:42:59.472 [2024-12-06 13:37:51.956331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.472 [2024-12-06 13:37:51.995063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.472 [2024-12-06 13:37:51.995119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:42:59.472 [2024-12-06 13:37:51.995155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.618 ms 00:42:59.472 [2024-12-06 13:37:51.995167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.473 [2024-12-06 13:37:52.031705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.473 [2024-12-06 13:37:52.031746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:42:59.473 [2024-12-06 13:37:52.031766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.472 ms 00:42:59.473 [2024-12-06 13:37:52.031776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.473 [2024-12-06 13:37:52.032577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.473 [2024-12-06 13:37:52.032598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:59.473 [2024-12-06 13:37:52.032614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:42:59.473 [2024-12-06 13:37:52.032625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.473 [2024-12-06 13:37:52.150261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.473 [2024-12-06 13:37:52.150338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:42:59.473 [2024-12-06 13:37:52.150367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 117.541 ms 00:42:59.473 [2024-12-06 13:37:52.150382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.473 [2024-12-06 
13:37:52.191833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.473 [2024-12-06 13:37:52.191891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:42:59.473 [2024-12-06 13:37:52.191913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.293 ms 00:42:59.473 [2024-12-06 13:37:52.191925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.473 [2024-12-06 13:37:52.229837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.473 [2024-12-06 13:37:52.230023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:42:59.473 [2024-12-06 13:37:52.230149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.838 ms 00:42:59.473 [2024-12-06 13:37:52.230190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.473 [2024-12-06 13:37:52.267994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.473 [2024-12-06 13:37:52.268150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:59.473 [2024-12-06 13:37:52.268300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.721 ms 00:42:59.473 [2024-12-06 13:37:52.268319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.473 [2024-12-06 13:37:52.268423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.473 [2024-12-06 13:37:52.268438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:59.473 [2024-12-06 13:37:52.268462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:42:59.473 [2024-12-06 13:37:52.268473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.473 [2024-12-06 13:37:52.268628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:59.473 [2024-12-06 13:37:52.268643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:59.473 [2024-12-06 13:37:52.268658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:42:59.473 [2024-12-06 13:37:52.268669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:59.473 [2024-12-06 13:37:52.270339] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4148.241 ms, result 0 00:42:59.473 { 00:42:59.473 "name": "ftl0", 00:42:59.473 "uuid": "f4956239-c7fa-4ce2-938b-33b46e0a5db3" 00:42:59.473 } 00:42:59.473 13:37:52 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:42:59.473 13:37:52 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:42:59.473 13:37:52 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:59.473 13:37:52 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:42:59.473 13:37:52 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:59.473 13:37:52 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:59.473 13:37:52 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:59.473 13:37:52 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:42:59.731 [ 00:42:59.731 { 00:42:59.731 "name": "ftl0", 00:42:59.731 "aliases": [ 00:42:59.731 "f4956239-c7fa-4ce2-938b-33b46e0a5db3" 00:42:59.731 ], 00:42:59.731 "product_name": "FTL 
disk", 00:42:59.731 "block_size": 4096, 00:42:59.731 "num_blocks": 20971520, 00:42:59.731 "uuid": "f4956239-c7fa-4ce2-938b-33b46e0a5db3", 00:42:59.731 "assigned_rate_limits": { 00:42:59.731 "rw_ios_per_sec": 0, 00:42:59.731 "rw_mbytes_per_sec": 0, 00:42:59.731 "r_mbytes_per_sec": 0, 00:42:59.731 "w_mbytes_per_sec": 0 00:42:59.731 }, 00:42:59.731 "claimed": false, 00:42:59.731 "zoned": false, 00:42:59.731 "supported_io_types": { 00:42:59.731 "read": true, 00:42:59.731 "write": true, 00:42:59.731 "unmap": true, 00:42:59.731 "flush": true, 00:42:59.731 "reset": false, 00:42:59.731 "nvme_admin": false, 00:42:59.731 "nvme_io": false, 00:42:59.731 "nvme_io_md": false, 00:42:59.731 "write_zeroes": true, 00:42:59.731 "zcopy": false, 00:42:59.732 "get_zone_info": false, 00:42:59.732 "zone_management": false, 00:42:59.732 "zone_append": false, 00:42:59.732 "compare": false, 00:42:59.732 "compare_and_write": false, 00:42:59.732 "abort": false, 00:42:59.732 "seek_hole": false, 00:42:59.732 "seek_data": false, 00:42:59.732 "copy": false, 00:42:59.732 "nvme_iov_md": false 00:42:59.732 }, 00:42:59.732 "driver_specific": { 00:42:59.732 "ftl": { 00:42:59.732 "base_bdev": "b331eeb5-e10c-4e81-aa1d-c29a185c7807", 00:42:59.732 "cache": "nvc0n1p0" 00:42:59.732 } 00:42:59.732 } 00:42:59.732 } 00:42:59.732 ] 00:42:59.990 13:37:52 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:42:59.990 13:37:52 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:42:59.990 13:37:52 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:43:00.249 13:37:53 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:43:00.249 13:37:53 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:43:00.249 [2024-12-06 13:37:53.284134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.249 [2024-12-06 13:37:53.284205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:00.249 [2024-12-06 13:37:53.284224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:00.249 [2024-12-06 13:37:53.284239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.249 [2024-12-06 13:37:53.284295] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:00.249 [2024-12-06 13:37:53.289261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.249 [2024-12-06 13:37:53.289298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:00.249 [2024-12-06 13:37:53.289320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.936 ms 00:43:00.249 [2024-12-06 13:37:53.289331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.249 [2024-12-06 13:37:53.290088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.249 [2024-12-06 13:37:53.290108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:00.249 [2024-12-06 13:37:53.290124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:43:00.249 [2024-12-06 13:37:53.290135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.249 [2024-12-06 13:37:53.292764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.249 [2024-12-06 13:37:53.292793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:00.249 
[2024-12-06 13:37:53.292808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:43:00.249 [2024-12-06 13:37:53.292820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.249 [2024-12-06 13:37:53.298023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.249 [2024-12-06 13:37:53.298055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:00.249 [2024-12-06 13:37:53.298071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.151 ms 00:43:00.249 [2024-12-06 13:37:53.298081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.249 [2024-12-06 13:37:53.337690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.249 [2024-12-06 13:37:53.337730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:00.249 [2024-12-06 13:37:53.337768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.486 ms 00:43:00.249 [2024-12-06 13:37:53.337778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.509 [2024-12-06 13:37:53.360735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.509 [2024-12-06 13:37:53.360798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:00.509 [2024-12-06 13:37:53.360823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.890 ms 00:43:00.509 [2024-12-06 13:37:53.360835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.509 [2024-12-06 13:37:53.361107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.509 [2024-12-06 13:37:53.361122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:00.509 [2024-12-06 13:37:53.361137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:43:00.509 [2024-12-06 13:37:53.361158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.509 [2024-12-06 13:37:53.398885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.509 [2024-12-06 13:37:53.399063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:43:00.509 [2024-12-06 13:37:53.399091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.685 ms 00:43:00.509 [2024-12-06 13:37:53.399102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.509 [2024-12-06 13:37:53.435777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.509 [2024-12-06 13:37:53.435816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:00.509 [2024-12-06 13:37:53.435834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.588 ms 00:43:00.509 [2024-12-06 13:37:53.435844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.509 [2024-12-06 13:37:53.472589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.509 [2024-12-06 13:37:53.472651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:00.509 [2024-12-06 13:37:53.472687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.677 ms 00:43:00.509 [2024-12-06 13:37:53.472697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.509 [2024-12-06 13:37:53.509514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.509 [2024-12-06 13:37:53.509683] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:00.509 [2024-12-06 13:37:53.509710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.650 ms 00:43:00.509 [2024-12-06 13:37:53.509722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.509 [2024-12-06 13:37:53.509784] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:00.509 [2024-12-06 13:37:53.509810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.509990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 
[2024-12-06 13:37:53.510107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:43:00.509 [2024-12-06 13:37:53.510472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:00.509 [2024-12-06 13:37:53.510535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.510997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:00.510 [2024-12-06 13:37:53.511206] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:00.510 [2024-12-06 13:37:53.511220] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f4956239-c7fa-4ce2-938b-33b46e0a5db3 00:43:00.510 [2024-12-06 13:37:53.511232] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:43:00.510 [2024-12-06 13:37:53.511249] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:43:00.510 [2024-12-06 13:37:53.511259] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:00.510 [2024-12-06 13:37:53.511278] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:00.510 [2024-12-06 13:37:53.511289] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:00.510 [2024-12-06 13:37:53.511304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:00.510 [2024-12-06 13:37:53.511315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:00.510 [2024-12-06 13:37:53.511327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:00.510 [2024-12-06 13:37:53.511336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:00.510 [2024-12-06 13:37:53.511350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.510 [2024-12-06 13:37:53.511361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:00.510 [2024-12-06 13:37:53.511375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.568 ms 00:43:00.510 [2024-12-06 13:37:53.511385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.510 [2024-12-06 13:37:53.533244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.510 [2024-12-06 13:37:53.533417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:00.510 [2024-12-06 13:37:53.533443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.769 ms 00:43:00.510 [2024-12-06 13:37:53.533455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.510 [2024-12-06 13:37:53.534110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.510 [2024-12-06 13:37:53.534125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:00.510 [2024-12-06 13:37:53.534140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:43:00.510 [2024-12-06 13:37:53.534151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.770 [2024-12-06 13:37:53.612516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:00.770 [2024-12-06 13:37:53.612597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:00.770 [2024-12-06 13:37:53.612636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:00.770 [2024-12-06 13:37:53.612669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
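In the statistics dump above, "WAF: inf" follows directly from the two counters beside it: write amplification factor = total media writes / user writes = 960 / 0, which is infinite by convention for a device that performed metadata writes but saw no user I/O before shutdown.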
00:43:00.770 [2024-12-06 13:37:53.612784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:00.770 [2024-12-06 13:37:53.612797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:00.770 [2024-12-06 13:37:53.612812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:00.770 [2024-12-06 13:37:53.612823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.770 [2024-12-06 13:37:53.613009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:00.770 [2024-12-06 13:37:53.613029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:00.770 [2024-12-06 13:37:53.613044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:00.770 [2024-12-06 13:37:53.613055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.770 [2024-12-06 13:37:53.613104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:00.770 [2024-12-06 13:37:53.613116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:00.770 [2024-12-06 13:37:53.613131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:00.770 [2024-12-06 13:37:53.613142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.770 [2024-12-06 13:37:53.763663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:00.770 [2024-12-06 13:37:53.763745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:00.770 [2024-12-06 13:37:53.763766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:00.770 [2024-12-06 13:37:53.763778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.029 [2024-12-06 13:37:53.876027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:01.029 [2024-12-06 13:37:53.876114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:01.029 [2024-12-06 13:37:53.876136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:01.029 [2024-12-06 13:37:53.876149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.029 [2024-12-06 13:37:53.876322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:01.029 [2024-12-06 13:37:53.876336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:01.029 [2024-12-06 13:37:53.876356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:01.029 [2024-12-06 13:37:53.876367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.029 [2024-12-06 13:37:53.876530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:01.029 [2024-12-06 13:37:53.876546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:01.029 [2024-12-06 13:37:53.876562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:01.029 [2024-12-06 13:37:53.876573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.029 [2024-12-06 13:37:53.876770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:01.029 [2024-12-06 13:37:53.876784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:01.029 [2024-12-06 13:37:53.876800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:01.029 [2024-12-06 
13:37:53.876814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.029 [2024-12-06 13:37:53.876900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:01.029 [2024-12-06 13:37:53.876913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:01.029 [2024-12-06 13:37:53.876928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:01.029 [2024-12-06 13:37:53.876938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.029 [2024-12-06 13:37:53.877015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:01.029 [2024-12-06 13:37:53.877027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:01.029 [2024-12-06 13:37:53.877041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:01.029 [2024-12-06 13:37:53.877055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.029 [2024-12-06 13:37:53.877137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:01.029 [2024-12-06 13:37:53.877150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:01.029 [2024-12-06 13:37:53.877166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:01.029 [2024-12-06 13:37:53.877176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.029 [2024-12-06 13:37:53.877437] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 593.247 ms, result 0 00:43:01.029 true 00:43:01.029 13:37:53 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77867 00:43:01.029 13:37:53 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77867 ']' 00:43:01.029 13:37:53 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77867 00:43:01.029 13:37:53 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:43:01.029 13:37:53 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:01.029 13:37:53 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77867 00:43:01.029 killing process with pid 77867 00:43:01.029 13:37:53 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:01.029 13:37:53 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:01.029 13:37:53 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77867' 00:43:01.029 13:37:53 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77867 00:43:01.029 13:37:53 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77867 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:06.304 13:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:43:06.564 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:43:06.564 fio-3.35 00:43:06.564 Starting 1 thread 00:43:13.135 00:43:13.135 test: (groupid=0, jobs=1): err= 0: pid=78086: Fri Dec 6 13:38:05 2024 00:43:13.135 read: IOPS=950, BW=63.1MiB/s (66.2MB/s)(255MiB/4034msec) 00:43:13.135 slat (usec): min=4, max=164, avg= 8.50, stdev= 4.66 00:43:13.135 clat (usec): min=298, max=1131, avg=466.18, stdev=64.43 00:43:13.135 lat (usec): min=305, max=1137, avg=474.69, stdev=65.02 00:43:13.135 clat percentiles (usec): 00:43:13.135 | 1.00th=[ 338], 5.00th=[ 359], 10.00th=[ 400], 20.00th=[ 416], 00:43:13.135 | 30.00th=[ 424], 40.00th=[ 445], 50.00th=[ 474], 60.00th=[ 486], 00:43:13.135 | 70.00th=[ 494], 80.00th=[ 510], 90.00th=[ 545], 95.00th=[ 570], 00:43:13.135 | 99.00th=[ 644], 99.50th=[ 676], 99.90th=[ 832], 99.95th=[ 889], 00:43:13.135 | 99.99th=[ 1139] 00:43:13.135 write: IOPS=956, BW=63.5MiB/s (66.6MB/s)(256MiB/4030msec); 0 zone resets 00:43:13.135 slat (nsec): min=16899, max=95876, avg=22890.12, stdev=6024.37 00:43:13.135 clat (usec): min=354, max=1046, avg=539.54, stdev=69.08 00:43:13.135 lat (usec): min=378, max=1066, avg=562.43, stdev=69.66 00:43:13.135 clat percentiles (usec): 00:43:13.135 | 1.00th=[ 404], 5.00th=[ 437], 10.00th=[ 453], 20.00th=[ 490], 00:43:13.135 | 30.00th=[ 506], 40.00th=[ 519], 50.00th=[ 529], 60.00th=[ 562], 00:43:13.135 | 70.00th=[ 578], 80.00th=[ 586], 90.00th=[ 611], 95.00th=[ 644], 00:43:13.135 | 99.00th=[ 783], 99.50th=[ 824], 99.90th=[ 930], 99.95th=[ 963], 00:43:13.135 | 99.99th=[ 1045] 00:43:13.135 bw ( KiB/s): min=61064, max=68136, per=100.00%, avg=65127.00, stdev=2506.07, samples=8 00:43:13.135 iops : min= 898, max= 1002, avg=957.75, stdev=36.85, samples=8 00:43:13.135 lat (usec) : 500=50.28%, 750=48.94%, 1000=0.75% 00:43:13.135 lat (msec) 
: 2=0.03% 00:43:13.135 cpu : usr=98.98%, sys=0.25%, ctx=11, majf=0, minf=1169 00:43:13.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:13.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.135 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:13.135 00:43:13.135 Run status group 0 (all jobs): 00:43:13.135 READ: bw=63.1MiB/s (66.2MB/s), 63.1MiB/s-63.1MiB/s (66.2MB/s-66.2MB/s), io=255MiB (267MB), run=4034-4034msec 00:43:13.135 WRITE: bw=63.5MiB/s (66.6MB/s), 63.5MiB/s-63.5MiB/s (66.6MB/s-66.6MB/s), io=256MiB (269MB), run=4030-4030msec 00:43:14.514 ----------------------------------------------------- 00:43:14.514 Suppressions used: 00:43:14.514 count bytes template 00:43:14.514 1 5 /usr/src/fio/parse.c 00:43:14.514 1 8 libtcmalloc_minimal.so 00:43:14.514 1 904 libcrypto.so 00:43:14.514 ----------------------------------------------------- 00:43:14.514 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:14.514 13:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:43:14.514 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:43:14.514 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:43:14.514 fio-3.35 00:43:14.514 Starting 2 threads 00:43:46.604 00:43:46.604 first_half: (groupid=0, jobs=1): err= 0: pid=78195: Fri Dec 6 13:38:34 2024 00:43:46.604 read: IOPS=2561, BW=10.0MiB/s (10.5MB/s)(255MiB/25472msec) 00:43:46.604 slat (usec): min=3, max=103, avg=11.33, stdev= 4.63 00:43:46.604 clat (usec): min=662, max=292327, avg=38745.35, stdev=21069.02 00:43:46.604 lat (usec): min=672, max=292333, avg=38756.67, stdev=21069.66 00:43:46.604 clat percentiles (msec): 00:43:46.604 | 1.00th=[ 11], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:46.604 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:43:46.604 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 42], 95.00th=[ 63], 00:43:46.604 | 99.00th=[ 163], 99.50th=[ 184], 99.90th=[ 213], 99.95th=[ 218], 00:43:46.604 | 99.99th=[ 275] 00:43:46.604 write: IOPS=3075, BW=12.0MiB/s (12.6MB/s)(256MiB/21310msec); 0 zone resets 00:43:46.604 slat (usec): min=4, max=475, avg=12.12, stdev= 7.27 00:43:46.604 clat (usec): min=415, max=104038, avg=11129.70, stdev=18902.67 00:43:46.604 lat (usec): min=428, max=104056, avg=11141.82, stdev=18902.92 00:43:46.604 clat percentiles (usec): 00:43:46.604 | 1.00th=[ 955], 5.00th=[ 1188], 10.00th=[ 1352], 20.00th=[ 1647], 00:43:46.604 | 30.00th=[ 2376], 40.00th=[ 4293], 50.00th=[ 5997], 60.00th=[ 7177], 00:43:46.604 | 70.00th=[ 8225], 80.00th=[ 12518], 90.00th=[ 15270], 95.00th=[ 77071], 00:43:46.604 | 99.00th=[ 87557], 99.50th=[ 89654], 99.90th=[ 96994], 99.95th=[ 99091], 00:43:46.604 | 99.99th=[102237] 00:43:46.604 bw ( KiB/s): min= 256, max=39784, per=97.32%, avg=21845.33, stdev=13205.60, samples=24 00:43:46.604 iops : min= 64, max= 9946, avg=5461.33, stdev=3301.40, samples=24 00:43:46.604 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.65% 00:43:46.604 lat (msec) : 2=12.89%, 4=5.72%, 10=19.07%, 20=8.32%, 50=46.31% 00:43:46.604 lat (msec) : 100=5.63%, 250=1.31%, 500=0.01% 00:43:46.604 cpu : usr=98.98%, sys=0.37%, ctx=49, majf=0, minf=5567 00:43:46.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:43:46.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.604 complete : 0=0.0%, 4=99.5%, 8=0.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:46.604 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:46.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:46.604 second_half: (groupid=0, jobs=1): err= 0: pid=78196: Fri Dec 6 13:38:34 2024 00:43:46.604 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(255MiB/25641msec) 00:43:46.604 slat (nsec): min=3583, max=92258, avg=7591.63, stdev=3293.38 00:43:46.604 clat (usec): min=966, max=299364, avg=37966.73, stdev=23453.42 00:43:46.604 lat (usec): min=972, max=299370, avg=37974.32, stdev=23453.94 00:43:46.604 clat percentiles (msec): 00:43:46.604 | 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:43:46.604 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:43:46.604 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 52], 00:43:46.604 | 
99.00th=[ 176], 99.50th=[ 199], 99.90th=[ 236], 99.95th=[ 262], 00:43:46.604 | 99.99th=[ 292] 00:43:46.604 write: IOPS=2805, BW=11.0MiB/s (11.5MB/s)(256MiB/23356msec); 0 zone resets 00:43:46.604 slat (usec): min=4, max=534, avg=10.02, stdev= 5.88 00:43:46.604 clat (usec): min=404, max=104937, avg=12209.92, stdev=20312.00 00:43:46.604 lat (usec): min=415, max=104959, avg=12219.95, stdev=20312.58 00:43:46.604 clat percentiles (usec): 00:43:46.604 | 1.00th=[ 947], 5.00th=[ 1221], 10.00th=[ 1385], 20.00th=[ 1663], 00:43:46.604 | 30.00th=[ 2147], 40.00th=[ 3884], 50.00th=[ 5604], 60.00th=[ 7111], 00:43:46.604 | 70.00th=[ 8455], 80.00th=[ 13042], 90.00th=[ 34341], 95.00th=[ 79168], 00:43:46.604 | 99.00th=[ 87557], 99.50th=[ 90702], 99.90th=[ 99091], 99.95th=[101188], 00:43:46.604 | 99.99th=[104334] 00:43:46.604 bw ( KiB/s): min= 144, max=52600, per=89.83%, avg=20164.85, stdev=16505.08, samples=26 00:43:46.604 iops : min= 36, max=13150, avg=5041.19, stdev=4126.29, samples=26 00:43:46.604 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.69% 00:43:46.604 lat (msec) : 2=13.58%, 4=6.35%, 10=18.46%, 20=6.95%, 50=47.93% 00:43:46.604 lat (msec) : 100=4.43%, 250=1.48%, 500=0.04% 00:43:46.604 cpu : usr=99.01%, sys=0.42%, ctx=33, majf=0, minf=5540 00:43:46.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:43:46.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.604 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:46.604 issued rwts: total=65320,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:46.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:46.604 00:43:46.604 Run status group 0 (all jobs): 00:43:46.604 READ: bw=19.9MiB/s (20.9MB/s), 9.95MiB/s-10.0MiB/s (10.4MB/s-10.5MB/s), io=510MiB (535MB), run=25472-25641msec 00:43:46.604 WRITE: bw=21.9MiB/s (23.0MB/s), 11.0MiB/s-12.0MiB/s (11.5MB/s-12.6MB/s), io=512MiB (537MB), run=21310-23356msec 00:43:46.604 ----------------------------------------------------- 00:43:46.604 Suppressions used: 00:43:46.604 count bytes template 00:43:46.604 2 10 /usr/src/fio/parse.c 00:43:46.604 2 192 /usr/src/fio/iolog.c 00:43:46.604 1 8 libtcmalloc_minimal.so 00:43:46.604 1 904 libcrypto.so 00:43:46.604 ----------------------------------------------------- 00:43:46.604 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:46.604 
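The asan_lib lookup traced before each fio run above exists because the fio ioengine plugin is built with AddressSanitizer: the sanitizer runtime has to be loaded before the plugin itself, so the helper resolves libasan from the plugin's ldd output and prepends it to LD_PRELOAD. A condensed sketch of that idiom, with paths as in this run (job.fio stands in for the real job file):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # resolve the ASan runtime the plugin links against, e.g. /usr/lib64/libasan.so.8
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the sanitizer first, then the SPDK bdev ioengine, then run the job
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio job.fio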
13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:46.604 13:38:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:43:46.604 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:43:46.604 fio-3.35 00:43:46.604 Starting 1 thread 00:44:01.493 00:44:01.493 test: (groupid=0, jobs=1): err= 0: pid=78521: Fri Dec 6 13:38:52 2024 00:44:01.493 read: IOPS=7237, BW=28.3MiB/s (29.6MB/s)(255MiB/9009msec) 00:44:01.493 slat (nsec): min=3600, max=93239, avg=8919.08, stdev=4050.42 00:44:01.493 clat (usec): min=874, max=35081, avg=17671.14, stdev=871.83 00:44:01.493 lat (usec): min=878, max=35093, avg=17680.05, stdev=871.69 00:44:01.493 clat percentiles (usec): 00:44:01.493 | 1.00th=[16712], 5.00th=[16909], 10.00th=[16909], 20.00th=[17171], 00:44:01.493 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17695], 00:44:01.493 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18482], 95.00th=[18744], 00:44:01.493 | 99.00th=[20055], 99.50th=[20317], 99.90th=[26346], 99.95th=[30540], 00:44:01.493 | 99.99th=[34341] 00:44:01.493 write: IOPS=12.7k, BW=49.7MiB/s (52.2MB/s)(256MiB/5146msec); 0 zone resets 00:44:01.493 slat (usec): min=4, max=654, avg=11.57, stdev= 7.50 00:44:01.493 clat (usec): min=587, max=53957, avg=9988.67, stdev=11776.17 00:44:01.493 lat (usec): min=597, max=53972, avg=10000.24, stdev=11776.10 00:44:01.493 clat percentiles (usec): 00:44:01.493 | 1.00th=[ 857], 5.00th=[ 996], 10.00th=[ 1090], 20.00th=[ 1237], 00:44:01.493 | 30.00th=[ 1418], 40.00th=[ 1827], 50.00th=[ 7308], 60.00th=[ 8291], 00:44:01.493 | 70.00th=[ 9634], 80.00th=[11469], 90.00th=[34866], 95.00th=[36963], 00:44:01.493 | 99.00th=[39060], 99.50th=[40109], 99.90th=[42730], 99.95th=[44303], 00:44:01.493 | 99.99th=[51119] 00:44:01.493 bw ( KiB/s): min=12744, max=64296, per=93.55%, avg=47654.36, stdev=13195.80, samples=11 00:44:01.493 iops : min= 3186, max=16074, avg=11913.55, stdev=3298.96, samples=11 00:44:01.493 lat (usec) : 750=0.09%, 1000=2.48% 00:44:01.493 lat (msec) : 2=17.85%, 4=0.59%, 10=15.07%, 20=55.47%, 50=8.44% 00:44:01.493 lat (msec) : 100=0.01% 00:44:01.493 cpu : usr=98.44%, sys=0.66%, ctx=20, majf=0, 
minf=5565 00:44:01.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:44:01.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:01.493 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:01.493 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:01.493 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:01.493 00:44:01.493 Run status group 0 (all jobs): 00:44:01.493 READ: bw=28.3MiB/s (29.6MB/s), 28.3MiB/s-28.3MiB/s (29.6MB/s-29.6MB/s), io=255MiB (267MB), run=9009-9009msec 00:44:01.493 WRITE: bw=49.7MiB/s (52.2MB/s), 49.7MiB/s-49.7MiB/s (52.2MB/s-52.2MB/s), io=256MiB (268MB), run=5146-5146msec 00:44:02.061 ----------------------------------------------------- 00:44:02.061 Suppressions used: 00:44:02.061 count bytes template 00:44:02.061 1 5 /usr/src/fio/parse.c 00:44:02.061 2 192 /usr/src/fio/iolog.c 00:44:02.061 1 8 libtcmalloc_minimal.so 00:44:02.061 1 904 libcrypto.so 00:44:02.061 ----------------------------------------------------- 00:44:02.061 00:44:02.061 13:38:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:44:02.061 13:38:55 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:02.061 13:38:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:44:02.062 13:38:55 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:02.062 Remove shared memory files 00:44:02.062 13:38:55 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:44:02.062 13:38:55 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:44:02.062 13:38:55 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:44:02.062 13:38:55 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:44:02.062 13:38:55 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58150 /dev/shm/spdk_tgt_trace.pid76768 00:44:02.062 13:38:55 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:44:02.062 13:38:55 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:44:02.062 ************************************ 00:44:02.062 END TEST ftl_fio_basic 00:44:02.062 ************************************ 00:44:02.062 00:44:02.062 real 1m12.031s 00:44:02.062 user 2m34.244s 00:44:02.062 sys 0m4.628s 00:44:02.062 13:38:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:02.062 13:38:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:44:02.320 13:38:55 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:44:02.320 13:38:55 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:44:02.320 13:38:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:02.320 13:38:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:44:02.320 ************************************ 00:44:02.320 START TEST ftl_bdevperf 00:44:02.320 ************************************ 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:44:02.320 * Looking for test storage... 
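The START/END banners and the real/user/sys summary above come from the harness's run_test wrapper. A toy stand-in that produces banners and timing of the same shape (not SPDK's actual implementation, which lives in autotest_common.sh and orders its output slightly differently):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # emits the real/user/sys summary
        local rc=$?                # exit status of the test command
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }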
00:44:02.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:44:02.320 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:02.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:02.579 --rc genhtml_branch_coverage=1 00:44:02.579 --rc genhtml_function_coverage=1 00:44:02.579 --rc genhtml_legend=1 00:44:02.579 --rc geninfo_all_blocks=1 00:44:02.579 --rc geninfo_unexecuted_blocks=1 00:44:02.579 00:44:02.579 ' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:02.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:02.579 --rc genhtml_branch_coverage=1 00:44:02.579 
--rc genhtml_function_coverage=1 00:44:02.579 --rc genhtml_legend=1 00:44:02.579 --rc geninfo_all_blocks=1 00:44:02.579 --rc geninfo_unexecuted_blocks=1 00:44:02.579 00:44:02.579 ' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:02.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:02.579 --rc genhtml_branch_coverage=1 00:44:02.579 --rc genhtml_function_coverage=1 00:44:02.579 --rc genhtml_legend=1 00:44:02.579 --rc geninfo_all_blocks=1 00:44:02.579 --rc geninfo_unexecuted_blocks=1 00:44:02.579 00:44:02.579 ' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:02.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:02.579 --rc genhtml_branch_coverage=1 00:44:02.579 --rc genhtml_function_coverage=1 00:44:02.579 --rc genhtml_legend=1 00:44:02.579 --rc geninfo_all_blocks=1 00:44:02.579 --rc geninfo_unexecuted_blocks=1 00:44:02.579 00:44:02.579 ' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78765 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78765 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78765 ']' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:02.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:02.579 13:38:55 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:02.579 [2024-12-06 13:38:55.574740] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
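bdevperf is started with -z, which brings it up idle so the bdev stack can be configured over RPC, and -T ftl0 names the bdev it will exercise once that device is created; waitforlisten then blocks until the RPC socket answers. A rough sketch of the launch-and-wait pattern (illustrative only; the real waitforlisten does more bookkeeping):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # poll the default RPC socket until the target is ready to accept commands
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done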
00:44:02.579 [2024-12-06 13:38:55.575146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78765 ] 00:44:02.838 [2024-12-06 13:38:55.768274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:02.838 [2024-12-06 13:38:55.910191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:03.407 13:38:56 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:03.407 13:38:56 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:44:03.407 13:38:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:44:03.407 13:38:56 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:44:03.407 13:38:56 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:44:03.407 13:38:56 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:44:03.407 13:38:56 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:44:03.407 13:38:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:44:03.666 13:38:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:44:03.666 13:38:56 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:44:03.666 13:38:56 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:44:03.666 13:38:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:44:03.666 13:38:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:03.666 13:38:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:44:03.666 13:38:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:44:03.666 13:38:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:44:03.925 13:38:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:03.925 { 00:44:03.925 "name": "nvme0n1", 00:44:03.925 "aliases": [ 00:44:03.925 "87db416e-0c4d-4f31-a1cd-aa4251447bd8" 00:44:03.925 ], 00:44:03.925 "product_name": "NVMe disk", 00:44:03.925 "block_size": 4096, 00:44:03.925 "num_blocks": 1310720, 00:44:03.925 "uuid": "87db416e-0c4d-4f31-a1cd-aa4251447bd8", 00:44:03.925 "numa_id": -1, 00:44:03.925 "assigned_rate_limits": { 00:44:03.925 "rw_ios_per_sec": 0, 00:44:03.925 "rw_mbytes_per_sec": 0, 00:44:03.925 "r_mbytes_per_sec": 0, 00:44:03.925 "w_mbytes_per_sec": 0 00:44:03.925 }, 00:44:03.925 "claimed": true, 00:44:03.925 "claim_type": "read_many_write_one", 00:44:03.925 "zoned": false, 00:44:03.925 "supported_io_types": { 00:44:03.925 "read": true, 00:44:03.925 "write": true, 00:44:03.925 "unmap": true, 00:44:03.925 "flush": true, 00:44:03.925 "reset": true, 00:44:03.925 "nvme_admin": true, 00:44:03.925 "nvme_io": true, 00:44:03.925 "nvme_io_md": false, 00:44:03.925 "write_zeroes": true, 00:44:03.925 "zcopy": false, 00:44:03.925 "get_zone_info": false, 00:44:03.925 "zone_management": false, 00:44:03.925 "zone_append": false, 00:44:03.925 "compare": true, 00:44:03.925 "compare_and_write": false, 00:44:03.925 "abort": true, 00:44:03.925 "seek_hole": false, 00:44:03.925 "seek_data": false, 00:44:03.925 "copy": true, 00:44:03.925 "nvme_iov_md": false 00:44:03.925 }, 00:44:03.925 "driver_specific": { 00:44:03.925 
"nvme": [ 00:44:03.925 { 00:44:03.925 "pci_address": "0000:00:11.0", 00:44:03.925 "trid": { 00:44:03.925 "trtype": "PCIe", 00:44:03.925 "traddr": "0000:00:11.0" 00:44:03.925 }, 00:44:03.925 "ctrlr_data": { 00:44:03.925 "cntlid": 0, 00:44:03.925 "vendor_id": "0x1b36", 00:44:03.925 "model_number": "QEMU NVMe Ctrl", 00:44:03.925 "serial_number": "12341", 00:44:03.925 "firmware_revision": "8.0.0", 00:44:03.925 "subnqn": "nqn.2019-08.org.qemu:12341", 00:44:03.925 "oacs": { 00:44:03.925 "security": 0, 00:44:03.925 "format": 1, 00:44:03.925 "firmware": 0, 00:44:03.925 "ns_manage": 1 00:44:03.925 }, 00:44:03.925 "multi_ctrlr": false, 00:44:03.925 "ana_reporting": false 00:44:03.925 }, 00:44:03.925 "vs": { 00:44:03.925 "nvme_version": "1.4" 00:44:03.925 }, 00:44:03.925 "ns_data": { 00:44:03.925 "id": 1, 00:44:03.925 "can_share": false 00:44:03.925 } 00:44:03.925 } 00:44:03.925 ], 00:44:03.925 "mp_policy": "active_passive" 00:44:03.925 } 00:44:03.925 } 00:44:03.925 ]' 00:44:03.925 13:38:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:03.925 13:38:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:44:03.925 13:38:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:03.925 13:38:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:44:03.926 13:38:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:44:03.926 13:38:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:44:03.926 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:44:03.926 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:44:03.926 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:44:03.926 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:44:03.926 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:44:04.185 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=7d8d6e6d-2e85-4232-85f8-4f8665002e36 00:44:04.185 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:44:04.185 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7d8d6e6d-2e85-4232-85f8-4f8665002e36 00:44:04.444 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:44:04.705 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=a3a6000c-4c3e-4f47-9957-ecd58fadd3c5 00:44:04.705 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a3a6000c-4c3e-4f47-9957-ecd58fadd3c5 00:44:04.965 13:38:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=6eec88a8-ccff-4877-837b-ec156ab95549 00:44:04.965 13:38:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6eec88a8-ccff-4877-837b-ec156ab95549 00:44:04.965 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:44:04.965 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:44:04.965 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=6eec88a8-ccff-4877-837b-ec156ab95549 00:44:04.965 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:44:04.965 13:38:57 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 6eec88a8-ccff-4877-837b-ec156ab95549 00:44:04.965 13:38:57 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=6eec88a8-ccff-4877-837b-ec156ab95549 00:44:04.965 13:38:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:04.965 13:38:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:44:04.965 13:38:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:44:04.965 13:38:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6eec88a8-ccff-4877-837b-ec156ab95549 00:44:05.224 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:05.224 { 00:44:05.224 "name": "6eec88a8-ccff-4877-837b-ec156ab95549", 00:44:05.224 "aliases": [ 00:44:05.224 "lvs/nvme0n1p0" 00:44:05.224 ], 00:44:05.224 "product_name": "Logical Volume", 00:44:05.224 "block_size": 4096, 00:44:05.224 "num_blocks": 26476544, 00:44:05.224 "uuid": "6eec88a8-ccff-4877-837b-ec156ab95549", 00:44:05.224 "assigned_rate_limits": { 00:44:05.224 "rw_ios_per_sec": 0, 00:44:05.224 "rw_mbytes_per_sec": 0, 00:44:05.224 "r_mbytes_per_sec": 0, 00:44:05.224 "w_mbytes_per_sec": 0 00:44:05.224 }, 00:44:05.224 "claimed": false, 00:44:05.224 "zoned": false, 00:44:05.224 "supported_io_types": { 00:44:05.224 "read": true, 00:44:05.224 "write": true, 00:44:05.224 "unmap": true, 00:44:05.224 "flush": false, 00:44:05.224 "reset": true, 00:44:05.224 "nvme_admin": false, 00:44:05.224 "nvme_io": false, 00:44:05.224 "nvme_io_md": false, 00:44:05.224 "write_zeroes": true, 00:44:05.225 "zcopy": false, 00:44:05.225 "get_zone_info": false, 00:44:05.225 "zone_management": false, 00:44:05.225 "zone_append": false, 00:44:05.225 "compare": false, 00:44:05.225 "compare_and_write": false, 00:44:05.225 "abort": false, 00:44:05.225 "seek_hole": true, 00:44:05.225 "seek_data": true, 00:44:05.225 "copy": false, 00:44:05.225 "nvme_iov_md": false 00:44:05.225 }, 00:44:05.225 "driver_specific": { 00:44:05.225 "lvol": { 00:44:05.225 "lvol_store_uuid": "a3a6000c-4c3e-4f47-9957-ecd58fadd3c5", 00:44:05.225 "base_bdev": "nvme0n1", 00:44:05.225 "thin_provision": true, 00:44:05.225 "num_allocated_clusters": 0, 00:44:05.225 "snapshot": false, 00:44:05.225 "clone": false, 00:44:05.225 "esnap_clone": false 00:44:05.225 } 00:44:05.225 } 00:44:05.225 } 00:44:05.225 ]' 00:44:05.225 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:05.225 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:44:05.225 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 6eec88a8-ccff-4877-837b-ec156ab95549 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=6eec88a8-ccff-4877-837b-ec156ab95549 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:44:05.484 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6eec88a8-ccff-4877-837b-ec156ab95549 00:44:06.051 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:06.051 { 00:44:06.051 "name": "6eec88a8-ccff-4877-837b-ec156ab95549", 00:44:06.051 "aliases": [ 00:44:06.051 "lvs/nvme0n1p0" 00:44:06.051 ], 00:44:06.051 "product_name": "Logical Volume", 00:44:06.051 "block_size": 4096, 00:44:06.051 "num_blocks": 26476544, 00:44:06.051 "uuid": "6eec88a8-ccff-4877-837b-ec156ab95549", 00:44:06.051 "assigned_rate_limits": { 00:44:06.051 "rw_ios_per_sec": 0, 00:44:06.051 "rw_mbytes_per_sec": 0, 00:44:06.051 "r_mbytes_per_sec": 0, 00:44:06.051 "w_mbytes_per_sec": 0 00:44:06.051 }, 00:44:06.051 "claimed": false, 00:44:06.051 "zoned": false, 00:44:06.051 "supported_io_types": { 00:44:06.051 "read": true, 00:44:06.051 "write": true, 00:44:06.051 "unmap": true, 00:44:06.051 "flush": false, 00:44:06.051 "reset": true, 00:44:06.051 "nvme_admin": false, 00:44:06.051 "nvme_io": false, 00:44:06.051 "nvme_io_md": false, 00:44:06.051 "write_zeroes": true, 00:44:06.051 "zcopy": false, 00:44:06.051 "get_zone_info": false, 00:44:06.051 "zone_management": false, 00:44:06.051 "zone_append": false, 00:44:06.051 "compare": false, 00:44:06.051 "compare_and_write": false, 00:44:06.051 "abort": false, 00:44:06.051 "seek_hole": true, 00:44:06.051 "seek_data": true, 00:44:06.051 "copy": false, 00:44:06.051 "nvme_iov_md": false 00:44:06.051 }, 00:44:06.051 "driver_specific": { 00:44:06.051 "lvol": { 00:44:06.051 "lvol_store_uuid": "a3a6000c-4c3e-4f47-9957-ecd58fadd3c5", 00:44:06.051 "base_bdev": "nvme0n1", 00:44:06.051 "thin_provision": true, 00:44:06.051 "num_allocated_clusters": 0, 00:44:06.051 "snapshot": false, 00:44:06.051 "clone": false, 00:44:06.051 "esnap_clone": false 00:44:06.051 } 00:44:06.051 } 00:44:06.051 } 00:44:06.051 ]' 00:44:06.051 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:06.051 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:44:06.051 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:06.051 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:06.051 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:06.051 13:38:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:44:06.051 13:38:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:44:06.051 13:38:58 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:44:06.051 13:38:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:44:06.051 13:38:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 6eec88a8-ccff-4877-837b-ec156ab95549 00:44:06.052 13:38:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=6eec88a8-ccff-4877-837b-ec156ab95549 00:44:06.052 13:38:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:06.052 13:38:59 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:44:06.052 13:38:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:44:06.052 13:38:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6eec88a8-ccff-4877-837b-ec156ab95549 00:44:06.309 13:38:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:06.309 { 00:44:06.309 "name": "6eec88a8-ccff-4877-837b-ec156ab95549", 00:44:06.309 "aliases": [ 00:44:06.309 "lvs/nvme0n1p0" 00:44:06.310 ], 00:44:06.310 "product_name": "Logical Volume", 00:44:06.310 "block_size": 4096, 00:44:06.310 "num_blocks": 26476544, 00:44:06.310 "uuid": "6eec88a8-ccff-4877-837b-ec156ab95549", 00:44:06.310 "assigned_rate_limits": { 00:44:06.310 "rw_ios_per_sec": 0, 00:44:06.310 "rw_mbytes_per_sec": 0, 00:44:06.310 "r_mbytes_per_sec": 0, 00:44:06.310 "w_mbytes_per_sec": 0 00:44:06.310 }, 00:44:06.310 "claimed": false, 00:44:06.310 "zoned": false, 00:44:06.310 "supported_io_types": { 00:44:06.310 "read": true, 00:44:06.310 "write": true, 00:44:06.310 "unmap": true, 00:44:06.310 "flush": false, 00:44:06.310 "reset": true, 00:44:06.310 "nvme_admin": false, 00:44:06.310 "nvme_io": false, 00:44:06.310 "nvme_io_md": false, 00:44:06.310 "write_zeroes": true, 00:44:06.310 "zcopy": false, 00:44:06.310 "get_zone_info": false, 00:44:06.310 "zone_management": false, 00:44:06.310 "zone_append": false, 00:44:06.310 "compare": false, 00:44:06.310 "compare_and_write": false, 00:44:06.310 "abort": false, 00:44:06.310 "seek_hole": true, 00:44:06.310 "seek_data": true, 00:44:06.310 "copy": false, 00:44:06.310 "nvme_iov_md": false 00:44:06.310 }, 00:44:06.310 "driver_specific": { 00:44:06.310 "lvol": { 00:44:06.310 "lvol_store_uuid": "a3a6000c-4c3e-4f47-9957-ecd58fadd3c5", 00:44:06.310 "base_bdev": "nvme0n1", 00:44:06.310 "thin_provision": true, 00:44:06.310 "num_allocated_clusters": 0, 00:44:06.310 "snapshot": false, 00:44:06.310 "clone": false, 00:44:06.310 "esnap_clone": false 00:44:06.310 } 00:44:06.310 } 00:44:06.310 } 00:44:06.310 ]' 00:44:06.310 13:38:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:06.310 13:38:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:44:06.310 13:38:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:06.310 13:38:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:06.310 13:38:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:06.310 13:38:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:44:06.310 13:38:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:44:06.310 13:38:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6eec88a8-ccff-4877-837b-ec156ab95549 -c nvc0n1p0 --l2p_dram_limit 20 00:44:06.568 [2024-12-06 13:38:59.641616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.568 [2024-12-06 13:38:59.641676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:06.568 [2024-12-06 13:38:59.641696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:06.568 [2024-12-06 13:38:59.641712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.568 [2024-12-06 13:38:59.641780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.568 [2024-12-06 13:38:59.641798] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:06.568 [2024-12-06 13:38:59.641812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:44:06.568 [2024-12-06 13:38:59.641827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.568 [2024-12-06 13:38:59.641851] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:06.568 [2024-12-06 13:38:59.642906] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:06.568 [2024-12-06 13:38:59.642940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.568 [2024-12-06 13:38:59.642957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:06.568 [2024-12-06 13:38:59.642971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.094 ms 00:44:06.568 [2024-12-06 13:38:59.642987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.568 [2024-12-06 13:38:59.643027] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 11db2013-9058-4d0f-8197-773160eb9264 00:44:06.568 [2024-12-06 13:38:59.644608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.568 [2024-12-06 13:38:59.644817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:44:06.568 [2024-12-06 13:38:59.644867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:44:06.568 [2024-12-06 13:38:59.644881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.568 [2024-12-06 13:38:59.652552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.568 [2024-12-06 13:38:59.652589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:06.568 [2024-12-06 13:38:59.652607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.560 ms 00:44:06.568 [2024-12-06 13:38:59.652624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.568 [2024-12-06 13:38:59.652732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.568 [2024-12-06 13:38:59.652749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:06.568 [2024-12-06 13:38:59.652770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:44:06.568 [2024-12-06 13:38:59.652783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.568 [2024-12-06 13:38:59.652850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.568 [2024-12-06 13:38:59.652865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:06.569 [2024-12-06 13:38:59.652882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:06.569 [2024-12-06 13:38:59.652903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.569 [2024-12-06 13:38:59.652937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:06.569 [2024-12-06 13:38:59.658119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.569 [2024-12-06 13:38:59.658162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:06.569 [2024-12-06 13:38:59.658177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.198 ms 00:44:06.569 [2024-12-06 13:38:59.658197] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.569 [2024-12-06 13:38:59.658234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.569 [2024-12-06 13:38:59.658251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:06.569 [2024-12-06 13:38:59.658263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:44:06.569 [2024-12-06 13:38:59.658278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.569 [2024-12-06 13:38:59.658324] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:44:06.569 [2024-12-06 13:38:59.658505] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:06.569 [2024-12-06 13:38:59.658530] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:06.569 [2024-12-06 13:38:59.658549] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:06.569 [2024-12-06 13:38:59.658565] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:06.569 [2024-12-06 13:38:59.658583] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:06.569 [2024-12-06 13:38:59.658597] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:44:06.569 [2024-12-06 13:38:59.658612] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:06.569 [2024-12-06 13:38:59.658625] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:06.569 [2024-12-06 13:38:59.658642] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:06.569 [2024-12-06 13:38:59.658659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.569 [2024-12-06 13:38:59.658674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:06.569 [2024-12-06 13:38:59.658688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:44:06.569 [2024-12-06 13:38:59.658702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.569 [2024-12-06 13:38:59.658780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.569 [2024-12-06 13:38:59.658798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:06.569 [2024-12-06 13:38:59.658812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:44:06.569 [2024-12-06 13:38:59.658829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.569 [2024-12-06 13:38:59.658912] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:06.569 [2024-12-06 13:38:59.658934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:06.569 [2024-12-06 13:38:59.658947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:06.569 [2024-12-06 13:38:59.658963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:06.569 [2024-12-06 13:38:59.658976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:06.569 [2024-12-06 13:38:59.658991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:06.569 [2024-12-06 13:38:59.659002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:44:06.569 
[2024-12-06 13:38:59.659017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:06.569 [2024-12-06 13:38:59.659029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:44:06.569 [2024-12-06 13:38:59.659044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:06.569 [2024-12-06 13:38:59.659055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:06.569 [2024-12-06 13:38:59.659087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:44:06.569 [2024-12-06 13:38:59.659099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:06.569 [2024-12-06 13:38:59.659114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:06.569 [2024-12-06 13:38:59.659127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:44:06.569 [2024-12-06 13:38:59.659145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:06.569 [2024-12-06 13:38:59.659156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:06.569 [2024-12-06 13:38:59.659171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:44:06.569 [2024-12-06 13:38:59.659184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:06.569 [2024-12-06 13:38:59.659200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:06.569 [2024-12-06 13:38:59.659211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:44:06.569 [2024-12-06 13:38:59.659226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:06.569 [2024-12-06 13:38:59.659237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:06.569 [2024-12-06 13:38:59.659252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:44:06.569 [2024-12-06 13:38:59.659263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:06.569 [2024-12-06 13:38:59.659279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:06.569 [2024-12-06 13:38:59.659290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:44:06.569 [2024-12-06 13:38:59.659305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:06.569 [2024-12-06 13:38:59.659316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:06.569 [2024-12-06 13:38:59.659330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:44:06.569 [2024-12-06 13:38:59.659341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:06.569 [2024-12-06 13:38:59.659358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:06.569 [2024-12-06 13:38:59.659370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:44:06.569 [2024-12-06 13:38:59.659384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:06.569 [2024-12-06 13:38:59.659395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:06.569 [2024-12-06 13:38:59.659421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:44:06.569 [2024-12-06 13:38:59.659434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:06.569 [2024-12-06 13:38:59.659450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:06.569 [2024-12-06 13:38:59.659462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:44:06.569 [2024-12-06 13:38:59.659476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:06.569 [2024-12-06 13:38:59.659488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:06.569 [2024-12-06 13:38:59.659502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:44:06.569 [2024-12-06 13:38:59.659513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:06.569 [2024-12-06 13:38:59.659527] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:06.569 [2024-12-06 13:38:59.659540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:06.569 [2024-12-06 13:38:59.659564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:06.569 [2024-12-06 13:38:59.659576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:06.569 [2024-12-06 13:38:59.659595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:06.569 [2024-12-06 13:38:59.659607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:06.569 [2024-12-06 13:38:59.659623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:06.569 [2024-12-06 13:38:59.659637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:06.569 [2024-12-06 13:38:59.659651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:06.569 [2024-12-06 13:38:59.659663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:06.569 [2024-12-06 13:38:59.659679] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:06.569 [2024-12-06 13:38:59.659694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:06.569 [2024-12-06 13:38:59.659711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:44:06.569 [2024-12-06 13:38:59.659724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:44:06.569 [2024-12-06 13:38:59.659739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:44:06.569 [2024-12-06 13:38:59.659752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:44:06.569 [2024-12-06 13:38:59.659767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:44:06.569 [2024-12-06 13:38:59.659779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:44:06.569 [2024-12-06 13:38:59.659795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:44:06.569 [2024-12-06 13:38:59.659807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:44:06.569 [2024-12-06 13:38:59.659826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:44:06.569 [2024-12-06 13:38:59.659838] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:44:06.569 [2024-12-06 13:38:59.659853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:44:06.569 [2024-12-06 13:38:59.659866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:44:06.569 [2024-12-06 13:38:59.659881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:44:06.569 [2024-12-06 13:38:59.659894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:44:06.569 [2024-12-06 13:38:59.659909] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:06.569 [2024-12-06 13:38:59.659922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:06.569 [2024-12-06 13:38:59.659942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:06.569 [2024-12-06 13:38:59.659955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:06.569 [2024-12-06 13:38:59.659979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:06.569 [2024-12-06 13:38:59.659993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:06.569 [2024-12-06 13:38:59.660009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:06.569 [2024-12-06 13:38:59.660022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:06.569 [2024-12-06 13:38:59.660038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:44:06.569 [2024-12-06 13:38:59.660050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:06.569 [2024-12-06 13:38:59.660100] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:44:06.569 [2024-12-06 13:38:59.660115] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:44:10.753 [2024-12-06 13:39:03.174864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.174936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:44:10.753 [2024-12-06 13:39:03.174959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3514.737 ms 00:44:10.753 [2024-12-06 13:39:03.174973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.214217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.214276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:10.753 [2024-12-06 13:39:03.214298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.925 ms 00:44:10.753 [2024-12-06 13:39:03.214311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.214481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.214499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:10.753 [2024-12-06 13:39:03.214519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:44:10.753 [2024-12-06 13:39:03.214532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.271877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.271930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:10.753 [2024-12-06 13:39:03.271950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.272 ms 00:44:10.753 [2024-12-06 13:39:03.271964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.272015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.272028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:10.753 [2024-12-06 13:39:03.272046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:10.753 [2024-12-06 13:39:03.272064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.272582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.272600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:10.753 [2024-12-06 13:39:03.272618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:44:10.753 [2024-12-06 13:39:03.272631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.272747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.272764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:10.753 [2024-12-06 13:39:03.272783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:44:10.753 [2024-12-06 13:39:03.272795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.292717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.292766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:10.753 [2024-12-06 
13:39:03.292787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.893 ms 00:44:10.753 [2024-12-06 13:39:03.292816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.305985] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:44:10.753 [2024-12-06 13:39:03.312127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.312175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:10.753 [2024-12-06 13:39:03.312192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.207 ms 00:44:10.753 [2024-12-06 13:39:03.312208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.401684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.401749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:44:10.753 [2024-12-06 13:39:03.401767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.432 ms 00:44:10.753 [2024-12-06 13:39:03.401784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.401973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.401995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:10.753 [2024-12-06 13:39:03.402010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:44:10.753 [2024-12-06 13:39:03.402029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.439849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.439903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:44:10.753 [2024-12-06 13:39:03.439920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.745 ms 00:44:10.753 [2024-12-06 13:39:03.439937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.476368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.476427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:44:10.753 [2024-12-06 13:39:03.476445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.385 ms 00:44:10.753 [2024-12-06 13:39:03.476462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.477246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.477284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:10.753 [2024-12-06 13:39:03.477299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:44:10.753 [2024-12-06 13:39:03.477314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.580684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.580758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:44:10.753 [2024-12-06 13:39:03.580777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.305 ms 00:44:10.753 [2024-12-06 13:39:03.580794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 
13:39:03.619562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.619633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:44:10.753 [2024-12-06 13:39:03.619653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.659 ms 00:44:10.753 [2024-12-06 13:39:03.619670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.753 [2024-12-06 13:39:03.656838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.753 [2024-12-06 13:39:03.656903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:44:10.753 [2024-12-06 13:39:03.656919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.122 ms 00:44:10.754 [2024-12-06 13:39:03.656934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.754 [2024-12-06 13:39:03.695707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.754 [2024-12-06 13:39:03.695780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:10.754 [2024-12-06 13:39:03.695797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.605 ms 00:44:10.754 [2024-12-06 13:39:03.695812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.754 [2024-12-06 13:39:03.695859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.754 [2024-12-06 13:39:03.695880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:10.754 [2024-12-06 13:39:03.695892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:10.754 [2024-12-06 13:39:03.695906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.754 [2024-12-06 13:39:03.696023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.754 [2024-12-06 13:39:03.696040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:10.754 [2024-12-06 13:39:03.696052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:44:10.754 [2024-12-06 13:39:03.696067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.754 [2024-12-06 13:39:03.697673] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4055.448 ms, result 0 00:44:10.754 { 00:44:10.754 "name": "ftl0", 00:44:10.754 "uuid": "11db2013-9058-4d0f-8197-773160eb9264" 00:44:10.754 } 00:44:10.754 13:39:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:44:10.754 13:39:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:44:10.754 13:39:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:44:11.012 13:39:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:44:11.012 [2024-12-06 13:39:04.102022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:44:11.012 I/O size of 69632 is greater than zero copy threshold (65536). 00:44:11.012 Zero copy mechanism will not be used. 00:44:11.012 Running I/O for 4 seconds... 
00:44:13.322 1787.00 IOPS, 118.67 MiB/s [2024-12-06T13:39:07.441Z] 1764.00 IOPS, 117.14 MiB/s [2024-12-06T13:39:08.378Z] 1791.67 IOPS, 118.98 MiB/s [2024-12-06T13:39:08.378Z] 1813.75 IOPS, 120.44 MiB/s 00:44:15.278 Latency(us) 00:44:15.278 [2024-12-06T13:39:08.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:15.278 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:44:15.278 ftl0 : 4.00 1813.25 120.41 0.00 0.00 577.61 189.20 16727.28 00:44:15.278 [2024-12-06T13:39:08.378Z] =================================================================================================================== 00:44:15.278 [2024-12-06T13:39:08.378Z] Total : 1813.25 120.41 0.00 0.00 577.61 189.20 16727.28 00:44:15.278 [2024-12-06 13:39:08.114886] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:44:15.278 { 00:44:15.278 "results": [ 00:44:15.278 { 00:44:15.278 "job": "ftl0", 00:44:15.278 "core_mask": "0x1", 00:44:15.278 "workload": "randwrite", 00:44:15.278 "status": "finished", 00:44:15.278 "queue_depth": 1, 00:44:15.278 "io_size": 69632, 00:44:15.278 "runtime": 4.001656, 00:44:15.278 "iops": 1813.2493147836797, 00:44:15.278 "mibps": 120.41108730985373, 00:44:15.278 "io_failed": 0, 00:44:15.278 "io_timeout": 0, 00:44:15.278 "avg_latency_us": 577.6146668766735, 00:44:15.278 "min_latency_us": 189.19619047619048, 00:44:15.278 "max_latency_us": 16727.28380952381 00:44:15.278 } 00:44:15.278 ], 00:44:15.278 "core_count": 1 00:44:15.278 } 00:44:15.278 13:39:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:44:15.278 [2024-12-06 13:39:08.274011] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:44:15.278 Running I/O for 4 seconds... 
00:44:17.586 9495.00 IOPS, 37.09 MiB/s [2024-12-06T13:39:11.619Z] 8977.00 IOPS, 35.07 MiB/s [2024-12-06T13:39:12.553Z] 8906.33 IOPS, 34.79 MiB/s [2024-12-06T13:39:12.553Z] 8923.50 IOPS, 34.86 MiB/s 00:44:19.453 Latency(us) 00:44:19.453 [2024-12-06T13:39:12.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:19.453 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:44:19.453 ftl0 : 4.02 8916.45 34.83 0.00 0.00 14324.31 312.08 33454.57 00:44:19.453 [2024-12-06T13:39:12.553Z] =================================================================================================================== 00:44:19.453 [2024-12-06T13:39:12.553Z] Total : 8916.45 34.83 0.00 0.00 14324.31 312.08 33454.57 00:44:19.453 [2024-12-06 13:39:12.303827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:44:19.453 { 00:44:19.453 "results": [ 00:44:19.453 { 00:44:19.453 "job": "ftl0", 00:44:19.453 "core_mask": "0x1", 00:44:19.453 "workload": "randwrite", 00:44:19.453 "status": "finished", 00:44:19.453 "queue_depth": 128, 00:44:19.453 "io_size": 4096, 00:44:19.453 "runtime": 4.017519, 00:44:19.453 "iops": 8916.448186057116, 00:44:19.453 "mibps": 34.82987572678561, 00:44:19.453 "io_failed": 0, 00:44:19.453 "io_timeout": 0, 00:44:19.453 "avg_latency_us": 14324.312715303975, 00:44:19.453 "min_latency_us": 312.0761904761905, 00:44:19.453 "max_latency_us": 33454.56761904762 00:44:19.453 } 00:44:19.453 ], 00:44:19.453 "core_count": 1 00:44:19.453 } 00:44:19.453 13:39:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:44:19.453 [2024-12-06 13:39:12.458563] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:44:19.453 Running I/O for 4 seconds... 
00:44:21.769 6847.00 IOPS, 26.75 MiB/s [2024-12-06T13:39:15.805Z] 7091.50 IOPS, 27.70 MiB/s [2024-12-06T13:39:16.740Z] 7195.00 IOPS, 28.11 MiB/s [2024-12-06T13:39:16.740Z] 7163.00 IOPS, 27.98 MiB/s 00:44:23.640 Latency(us) 00:44:23.640 [2024-12-06T13:39:16.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:23.640 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:23.640 Verification LBA range: start 0x0 length 0x1400000 00:44:23.640 ftl0 : 4.01 7175.82 28.03 0.00 0.00 17778.17 288.67 29210.33 00:44:23.640 [2024-12-06T13:39:16.740Z] =================================================================================================================== 00:44:23.640 [2024-12-06T13:39:16.740Z] Total : 7175.82 28.03 0.00 0.00 17778.17 288.67 29210.33 00:44:23.640 [2024-12-06 13:39:16.495192] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:44:23.640 { 00:44:23.640 "results": [ 00:44:23.640 { 00:44:23.640 "job": "ftl0", 00:44:23.640 "core_mask": "0x1", 00:44:23.640 "workload": "verify", 00:44:23.640 "status": "finished", 00:44:23.640 "verify_range": { 00:44:23.640 "start": 0, 00:44:23.640 "length": 20971520 00:44:23.640 }, 00:44:23.640 "queue_depth": 128, 00:44:23.640 "io_size": 4096, 00:44:23.640 "runtime": 4.010689, 00:44:23.640 "iops": 7175.824403238446, 00:44:23.640 "mibps": 28.03056407515018, 00:44:23.640 "io_failed": 0, 00:44:23.640 "io_timeout": 0, 00:44:23.640 "avg_latency_us": 17778.174695390317, 00:44:23.640 "min_latency_us": 288.67047619047617, 00:44:23.640 "max_latency_us": 29210.33142857143 00:44:23.640 } 00:44:23.640 ], 00:44:23.640 "core_count": 1 00:44:23.640 } 00:44:23.640 13:39:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:44:23.899 [2024-12-06 13:39:16.764262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:23.899 [2024-12-06 13:39:16.764326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:23.899 [2024-12-06 13:39:16.764346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:23.899 [2024-12-06 13:39:16.764362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:23.899 [2024-12-06 13:39:16.764393] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:23.899 [2024-12-06 13:39:16.769013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:23.899 [2024-12-06 13:39:16.769040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:23.899 [2024-12-06 13:39:16.769057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.582 ms 00:44:23.899 [2024-12-06 13:39:16.769068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:23.899 [2024-12-06 13:39:16.771240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:23.899 [2024-12-06 13:39:16.771276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:23.899 [2024-12-06 13:39:16.771297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:44:23.899 [2024-12-06 13:39:16.771308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:23.899 [2024-12-06 13:39:16.949679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:23.899 [2024-12-06 13:39:16.949746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:44:23.899 [2024-12-06 13:39:16.949778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.337 ms 00:44:23.899 [2024-12-06 13:39:16.949792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:23.899 [2024-12-06 13:39:16.955265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:23.899 [2024-12-06 13:39:16.955310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:23.899 [2024-12-06 13:39:16.955345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.425 ms 00:44:23.899 [2024-12-06 13:39:16.955361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:23.900 [2024-12-06 13:39:16.995999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:23.900 [2024-12-06 13:39:16.996042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:23.900 [2024-12-06 13:39:16.996063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.538 ms 00:44:23.900 [2024-12-06 13:39:16.996073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.160 [2024-12-06 13:39:17.019822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.160 [2024-12-06 13:39:17.019867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:24.160 [2024-12-06 13:39:17.019886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.696 ms 00:44:24.160 [2024-12-06 13:39:17.019898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.160 [2024-12-06 13:39:17.020063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.160 [2024-12-06 13:39:17.020078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:24.160 [2024-12-06 13:39:17.020098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:44:24.160 [2024-12-06 13:39:17.020109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.160 [2024-12-06 13:39:17.057557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.160 [2024-12-06 13:39:17.057609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:24.160 [2024-12-06 13:39:17.057649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.421 ms 00:44:24.160 [2024-12-06 13:39:17.057660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.160 [2024-12-06 13:39:17.094890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.160 [2024-12-06 13:39:17.094928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:24.160 [2024-12-06 13:39:17.094946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.186 ms 00:44:24.160 [2024-12-06 13:39:17.094957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.160 [2024-12-06 13:39:17.131239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.160 [2024-12-06 13:39:17.131274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:24.160 [2024-12-06 13:39:17.131291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.237 ms 00:44:24.160 [2024-12-06 13:39:17.131302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.160 [2024-12-06 13:39:17.168188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.160 [2024-12-06 13:39:17.168239] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:24.160 [2024-12-06 13:39:17.168263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.728 ms 00:44:24.160 [2024-12-06 13:39:17.168273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.160 [2024-12-06 13:39:17.168316] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:24.160 [2024-12-06 13:39:17.168337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:44:24.160 [2024-12-06 13:39:17.168645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.168983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.169002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.169015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.169030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.169042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.169058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.169071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.169086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:24.160 [2024-12-06 13:39:17.169099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169710] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:24.161 [2024-12-06 13:39:17.169769] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:24.161 [2024-12-06 13:39:17.169783] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 11db2013-9058-4d0f-8197-773160eb9264 00:44:24.161 [2024-12-06 13:39:17.169799] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:24.161 [2024-12-06 13:39:17.169812] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:24.161 [2024-12-06 13:39:17.169823] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:24.161 [2024-12-06 13:39:17.169837] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:24.161 [2024-12-06 13:39:17.169847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:24.161 [2024-12-06 13:39:17.169861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:24.161 [2024-12-06 13:39:17.169871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:24.161 [2024-12-06 13:39:17.169888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:24.161 [2024-12-06 13:39:17.169897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:24.161 [2024-12-06 13:39:17.169910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.161 [2024-12-06 13:39:17.169921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:24.161 [2024-12-06 13:39:17.169936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:44:24.161 [2024-12-06 13:39:17.169946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.161 [2024-12-06 13:39:17.192278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.161 [2024-12-06 13:39:17.192310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:24.161 [2024-12-06 13:39:17.192327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.247 ms 00:44:24.161 [2024-12-06 13:39:17.192338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.161 [2024-12-06 13:39:17.192958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:24.161 [2024-12-06 13:39:17.192976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:24.161 [2024-12-06 13:39:17.192992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:44:24.161 [2024-12-06 13:39:17.193002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.161 [2024-12-06 13:39:17.253749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:24.161 [2024-12-06 13:39:17.253786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:24.161 [2024-12-06 13:39:17.253808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:24.161 [2024-12-06 13:39:17.253820] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:44:24.161 [2024-12-06 13:39:17.253896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:24.161 [2024-12-06 13:39:17.253907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:24.161 [2024-12-06 13:39:17.253921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:24.161 [2024-12-06 13:39:17.253932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.161 [2024-12-06 13:39:17.254026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:24.161 [2024-12-06 13:39:17.254040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:24.161 [2024-12-06 13:39:17.254054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:24.161 [2024-12-06 13:39:17.254065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.161 [2024-12-06 13:39:17.254087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:24.161 [2024-12-06 13:39:17.254098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:24.161 [2024-12-06 13:39:17.254113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:24.161 [2024-12-06 13:39:17.254123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.421 [2024-12-06 13:39:17.395395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:24.421 [2024-12-06 13:39:17.395481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:24.421 [2024-12-06 13:39:17.395508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:24.421 [2024-12-06 13:39:17.395519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.421 [2024-12-06 13:39:17.507350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:24.421 [2024-12-06 13:39:17.507415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:24.421 [2024-12-06 13:39:17.507436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:24.421 [2024-12-06 13:39:17.507447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.421 [2024-12-06 13:39:17.507628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:24.421 [2024-12-06 13:39:17.507641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:24.421 [2024-12-06 13:39:17.507656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:24.421 [2024-12-06 13:39:17.507667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.421 [2024-12-06 13:39:17.507736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:24.421 [2024-12-06 13:39:17.507748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:24.422 [2024-12-06 13:39:17.507763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:24.422 [2024-12-06 13:39:17.507774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.422 [2024-12-06 13:39:17.507909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:24.422 [2024-12-06 13:39:17.507927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:24.422 [2024-12-06 13:39:17.507946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:44:24.422 [2024-12-06 13:39:17.507956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.422 [2024-12-06 13:39:17.508001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:24.422 [2024-12-06 13:39:17.508013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:24.422 [2024-12-06 13:39:17.508027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:24.422 [2024-12-06 13:39:17.508038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.422 [2024-12-06 13:39:17.508086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:24.422 [2024-12-06 13:39:17.508101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:24.422 [2024-12-06 13:39:17.508115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:24.422 [2024-12-06 13:39:17.508138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.422 [2024-12-06 13:39:17.508195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:24.422 [2024-12-06 13:39:17.508207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:24.422 [2024-12-06 13:39:17.508221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:24.422 [2024-12-06 13:39:17.508232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:24.422 [2024-12-06 13:39:17.508389] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 744.079 ms, result 0 00:44:24.422 true 00:44:24.681 13:39:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78765 00:44:24.681 13:39:17 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78765 ']' 00:44:24.681 13:39:17 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78765 00:44:24.681 13:39:17 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:44:24.681 13:39:17 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:24.681 13:39:17 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78765 00:44:24.681 killing process with pid 78765 00:44:24.681 Received shutdown signal, test time was about 4.000000 seconds 00:44:24.681 00:44:24.681 Latency(us) 00:44:24.681 [2024-12-06T13:39:17.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:24.681 [2024-12-06T13:39:17.781Z] =================================================================================================================== 00:44:24.681 [2024-12-06T13:39:17.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:24.681 13:39:17 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:24.681 13:39:17 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:24.681 13:39:17 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78765' 00:44:24.681 13:39:17 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78765 00:44:24.681 13:39:17 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78765 00:44:26.061 Remove shared memory files 00:44:26.061 13:39:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:44:26.061 13:39:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:44:26.061 13:39:19 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:44:26.061 13:39:19 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:44:26.061 13:39:19 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:44:26.061 13:39:19 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:44:26.061 13:39:19 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:44:26.061 13:39:19 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:44:26.061 00:44:26.061 real 0m23.843s 00:44:26.061 user 0m26.661s 00:44:26.061 sys 0m1.413s 00:44:26.061 13:39:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:26.061 13:39:19 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:26.061 ************************************ 00:44:26.061 END TEST ftl_bdevperf 00:44:26.061 ************************************ 00:44:26.061 13:39:19 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:44:26.061 13:39:19 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:44:26.061 13:39:19 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:26.061 13:39:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:44:26.061 ************************************ 00:44:26.061 START TEST ftl_trim 00:44:26.061 ************************************ 00:44:26.061 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:44:26.322 * Looking for test storage... 00:44:26.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:44:26.322 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:26.322 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:44:26.322 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:26.322 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:26.322 13:39:19 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:44:26.322 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:26.322 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:26.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:26.322 --rc genhtml_branch_coverage=1 00:44:26.322 --rc genhtml_function_coverage=1 00:44:26.322 --rc genhtml_legend=1 00:44:26.322 --rc geninfo_all_blocks=1 00:44:26.322 --rc geninfo_unexecuted_blocks=1 00:44:26.322 00:44:26.322 ' 00:44:26.322 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:26.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:26.322 --rc genhtml_branch_coverage=1 00:44:26.322 --rc genhtml_function_coverage=1 00:44:26.322 --rc genhtml_legend=1 00:44:26.322 --rc geninfo_all_blocks=1 00:44:26.322 --rc geninfo_unexecuted_blocks=1 00:44:26.322 00:44:26.322 ' 00:44:26.322 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:26.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:26.322 --rc genhtml_branch_coverage=1 00:44:26.322 --rc genhtml_function_coverage=1 00:44:26.322 --rc genhtml_legend=1 00:44:26.322 --rc geninfo_all_blocks=1 00:44:26.322 --rc geninfo_unexecuted_blocks=1 00:44:26.322 00:44:26.322 ' 00:44:26.322 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:26.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:26.322 --rc genhtml_branch_coverage=1 00:44:26.322 --rc genhtml_function_coverage=1 00:44:26.322 --rc genhtml_legend=1 00:44:26.322 --rc geninfo_all_blocks=1 00:44:26.322 --rc geninfo_unexecuted_blocks=1 00:44:26.322 00:44:26.322 ' 00:44:26.322 13:39:19 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:44:26.322 13:39:19 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:26.323 13:39:19 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79122 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79122 00:44:26.323 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79122 ']' 00:44:26.323 13:39:19 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:44:26.323 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:26.323 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:26.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:26.323 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:26.323 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:26.323 13:39:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:44:26.583 [2024-12-06 13:39:19.516930] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:44:26.583 [2024-12-06 13:39:19.517789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79122 ] 00:44:26.842 [2024-12-06 13:39:19.724333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:26.842 [2024-12-06 13:39:19.876589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:26.842 [2024-12-06 13:39:19.876716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:26.842 [2024-12-06 13:39:19.876747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:28.222 13:39:20 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:28.222 13:39:20 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:44:28.222 13:39:20 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:44:28.222 13:39:20 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:44:28.222 13:39:20 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:44:28.222 13:39:20 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:44:28.222 13:39:20 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:44:28.222 13:39:20 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:44:28.485 13:39:21 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:44:28.485 13:39:21 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:44:28.486 13:39:21 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:44:28.486 13:39:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:44:28.486 13:39:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:28.486 13:39:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:44:28.486 13:39:21 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:44:28.486 13:39:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:44:28.745 13:39:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:28.745 { 00:44:28.745 "name": "nvme0n1", 00:44:28.745 "aliases": [ 
00:44:28.745 "f6f14bd8-18ee-4ce5-af27-111274752f78" 00:44:28.745 ], 00:44:28.745 "product_name": "NVMe disk", 00:44:28.745 "block_size": 4096, 00:44:28.745 "num_blocks": 1310720, 00:44:28.745 "uuid": "f6f14bd8-18ee-4ce5-af27-111274752f78", 00:44:28.745 "numa_id": -1, 00:44:28.745 "assigned_rate_limits": { 00:44:28.745 "rw_ios_per_sec": 0, 00:44:28.745 "rw_mbytes_per_sec": 0, 00:44:28.745 "r_mbytes_per_sec": 0, 00:44:28.745 "w_mbytes_per_sec": 0 00:44:28.745 }, 00:44:28.745 "claimed": true, 00:44:28.745 "claim_type": "read_many_write_one", 00:44:28.745 "zoned": false, 00:44:28.745 "supported_io_types": { 00:44:28.745 "read": true, 00:44:28.745 "write": true, 00:44:28.745 "unmap": true, 00:44:28.745 "flush": true, 00:44:28.745 "reset": true, 00:44:28.745 "nvme_admin": true, 00:44:28.745 "nvme_io": true, 00:44:28.745 "nvme_io_md": false, 00:44:28.745 "write_zeroes": true, 00:44:28.745 "zcopy": false, 00:44:28.745 "get_zone_info": false, 00:44:28.745 "zone_management": false, 00:44:28.745 "zone_append": false, 00:44:28.745 "compare": true, 00:44:28.745 "compare_and_write": false, 00:44:28.745 "abort": true, 00:44:28.745 "seek_hole": false, 00:44:28.745 "seek_data": false, 00:44:28.745 "copy": true, 00:44:28.745 "nvme_iov_md": false 00:44:28.745 }, 00:44:28.745 "driver_specific": { 00:44:28.745 "nvme": [ 00:44:28.745 { 00:44:28.745 "pci_address": "0000:00:11.0", 00:44:28.745 "trid": { 00:44:28.745 "trtype": "PCIe", 00:44:28.745 "traddr": "0000:00:11.0" 00:44:28.745 }, 00:44:28.745 "ctrlr_data": { 00:44:28.745 "cntlid": 0, 00:44:28.745 "vendor_id": "0x1b36", 00:44:28.745 "model_number": "QEMU NVMe Ctrl", 00:44:28.745 "serial_number": "12341", 00:44:28.745 "firmware_revision": "8.0.0", 00:44:28.745 "subnqn": "nqn.2019-08.org.qemu:12341", 00:44:28.745 "oacs": { 00:44:28.745 "security": 0, 00:44:28.745 "format": 1, 00:44:28.745 "firmware": 0, 00:44:28.745 "ns_manage": 1 00:44:28.745 }, 00:44:28.745 "multi_ctrlr": false, 00:44:28.745 "ana_reporting": false 00:44:28.745 }, 00:44:28.745 "vs": { 00:44:28.745 "nvme_version": "1.4" 00:44:28.745 }, 00:44:28.745 "ns_data": { 00:44:28.745 "id": 1, 00:44:28.745 "can_share": false 00:44:28.745 } 00:44:28.745 } 00:44:28.745 ], 00:44:28.745 "mp_policy": "active_passive" 00:44:28.745 } 00:44:28.745 } 00:44:28.745 ]' 00:44:28.745 13:39:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:28.745 13:39:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:44:28.745 13:39:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:28.745 13:39:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:44:28.745 13:39:21 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:44:28.745 13:39:21 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:44:28.745 13:39:21 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:44:28.745 13:39:21 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:44:28.745 13:39:21 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:44:28.745 13:39:21 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:44:28.745 13:39:21 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:44:29.005 13:39:21 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=a3a6000c-4c3e-4f47-9957-ecd58fadd3c5 00:44:29.005 13:39:21 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:44:29.005 13:39:21 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u a3a6000c-4c3e-4f47-9957-ecd58fadd3c5 00:44:29.264 13:39:22 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:44:29.524 13:39:22 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=e3f3d1cc-f8a7-48df-bea0-ca3f3a86a216 00:44:29.524 13:39:22 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e3f3d1cc-f8a7-48df-bea0-ca3f3a86a216 00:44:29.524 13:39:22 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=8c9a06c2-3bae-4f15-9cb6-99cea3e87499 00:44:29.524 13:39:22 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8c9a06c2-3bae-4f15-9cb6-99cea3e87499 00:44:29.524 13:39:22 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:44:29.524 13:39:22 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:44:29.524 13:39:22 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=8c9a06c2-3bae-4f15-9cb6-99cea3e87499 00:44:29.524 13:39:22 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:44:29.524 13:39:22 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 8c9a06c2-3bae-4f15-9cb6-99cea3e87499 00:44:29.524 13:39:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=8c9a06c2-3bae-4f15-9cb6-99cea3e87499 00:44:29.524 13:39:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:29.524 13:39:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:44:29.524 13:39:22 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:44:29.524 13:39:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c9a06c2-3bae-4f15-9cb6-99cea3e87499 00:44:29.784 13:39:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:29.784 { 00:44:29.784 "name": "8c9a06c2-3bae-4f15-9cb6-99cea3e87499", 00:44:29.784 "aliases": [ 00:44:29.784 "lvs/nvme0n1p0" 00:44:29.784 ], 00:44:29.784 "product_name": "Logical Volume", 00:44:29.784 "block_size": 4096, 00:44:29.784 "num_blocks": 26476544, 00:44:29.784 "uuid": "8c9a06c2-3bae-4f15-9cb6-99cea3e87499", 00:44:29.784 "assigned_rate_limits": { 00:44:29.784 "rw_ios_per_sec": 0, 00:44:29.784 "rw_mbytes_per_sec": 0, 00:44:29.784 "r_mbytes_per_sec": 0, 00:44:29.784 "w_mbytes_per_sec": 0 00:44:29.784 }, 00:44:29.784 "claimed": false, 00:44:29.784 "zoned": false, 00:44:29.784 "supported_io_types": { 00:44:29.784 "read": true, 00:44:29.784 "write": true, 00:44:29.784 "unmap": true, 00:44:29.784 "flush": false, 00:44:29.784 "reset": true, 00:44:29.784 "nvme_admin": false, 00:44:29.784 "nvme_io": false, 00:44:29.784 "nvme_io_md": false, 00:44:29.784 "write_zeroes": true, 00:44:29.784 "zcopy": false, 00:44:29.784 "get_zone_info": false, 00:44:29.784 "zone_management": false, 00:44:29.784 "zone_append": false, 00:44:29.784 "compare": false, 00:44:29.784 "compare_and_write": false, 00:44:29.784 "abort": false, 00:44:29.784 "seek_hole": true, 00:44:29.784 "seek_data": true, 00:44:29.784 "copy": false, 00:44:29.784 "nvme_iov_md": false 00:44:29.784 }, 00:44:29.784 "driver_specific": { 00:44:29.784 "lvol": { 00:44:29.784 "lvol_store_uuid": "e3f3d1cc-f8a7-48df-bea0-ca3f3a86a216", 00:44:29.784 "base_bdev": "nvme0n1", 00:44:29.784 "thin_provision": true, 00:44:29.784 "num_allocated_clusters": 0, 00:44:29.784 "snapshot": false, 00:44:29.784 "clone": false, 00:44:29.784 "esnap_clone": false 00:44:29.784 } 00:44:29.784 } 00:44:29.784 } 00:44:29.784 ]' 00:44:29.784 13:39:22 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:30.044 13:39:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:44:30.044 13:39:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:30.044 13:39:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:30.044 13:39:22 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:30.044 13:39:22 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:44:30.044 13:39:22 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:44:30.044 13:39:22 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:44:30.044 13:39:22 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:44:30.303 13:39:23 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:44:30.303 13:39:23 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:44:30.303 13:39:23 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 8c9a06c2-3bae-4f15-9cb6-99cea3e87499 00:44:30.303 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=8c9a06c2-3bae-4f15-9cb6-99cea3e87499 00:44:30.303 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:30.303 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:44:30.303 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:44:30.303 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c9a06c2-3bae-4f15-9cb6-99cea3e87499 00:44:30.570 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:30.570 { 00:44:30.570 "name": "8c9a06c2-3bae-4f15-9cb6-99cea3e87499", 00:44:30.570 "aliases": [ 00:44:30.570 "lvs/nvme0n1p0" 00:44:30.570 ], 00:44:30.570 "product_name": "Logical Volume", 00:44:30.570 "block_size": 4096, 00:44:30.570 "num_blocks": 26476544, 00:44:30.570 "uuid": "8c9a06c2-3bae-4f15-9cb6-99cea3e87499", 00:44:30.570 "assigned_rate_limits": { 00:44:30.570 "rw_ios_per_sec": 0, 00:44:30.570 "rw_mbytes_per_sec": 0, 00:44:30.570 "r_mbytes_per_sec": 0, 00:44:30.570 "w_mbytes_per_sec": 0 00:44:30.570 }, 00:44:30.570 "claimed": false, 00:44:30.570 "zoned": false, 00:44:30.570 "supported_io_types": { 00:44:30.570 "read": true, 00:44:30.570 "write": true, 00:44:30.570 "unmap": true, 00:44:30.570 "flush": false, 00:44:30.570 "reset": true, 00:44:30.570 "nvme_admin": false, 00:44:30.570 "nvme_io": false, 00:44:30.570 "nvme_io_md": false, 00:44:30.570 "write_zeroes": true, 00:44:30.570 "zcopy": false, 00:44:30.570 "get_zone_info": false, 00:44:30.570 "zone_management": false, 00:44:30.571 "zone_append": false, 00:44:30.571 "compare": false, 00:44:30.571 "compare_and_write": false, 00:44:30.571 "abort": false, 00:44:30.571 "seek_hole": true, 00:44:30.571 "seek_data": true, 00:44:30.571 "copy": false, 00:44:30.571 "nvme_iov_md": false 00:44:30.571 }, 00:44:30.571 "driver_specific": { 00:44:30.571 "lvol": { 00:44:30.571 "lvol_store_uuid": "e3f3d1cc-f8a7-48df-bea0-ca3f3a86a216", 00:44:30.571 "base_bdev": "nvme0n1", 00:44:30.571 "thin_provision": true, 00:44:30.571 "num_allocated_clusters": 0, 00:44:30.571 "snapshot": false, 00:44:30.571 "clone": false, 00:44:30.571 "esnap_clone": false 00:44:30.571 } 00:44:30.571 } 00:44:30.571 } 00:44:30.571 ]' 00:44:30.571 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:30.571 13:39:23 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:44:30.571 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:30.571 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:44:30.571 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:30.571 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:44:30.571 13:39:23 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:44:30.571 13:39:23 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:44:30.857 13:39:23 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:44:30.857 13:39:23 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:44:30.857 13:39:23 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 8c9a06c2-3bae-4f15-9cb6-99cea3e87499 00:44:30.857 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=8c9a06c2-3bae-4f15-9cb6-99cea3e87499 00:44:30.857 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:44:30.857 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:44:30.857 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:44:30.857 13:39:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c9a06c2-3bae-4f15-9cb6-99cea3e87499 00:44:31.124 13:39:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:44:31.124 { 00:44:31.124 "name": "8c9a06c2-3bae-4f15-9cb6-99cea3e87499", 00:44:31.124 "aliases": [ 00:44:31.124 "lvs/nvme0n1p0" 00:44:31.124 ], 00:44:31.124 "product_name": "Logical Volume", 00:44:31.124 "block_size": 4096, 00:44:31.124 "num_blocks": 26476544, 00:44:31.124 "uuid": "8c9a06c2-3bae-4f15-9cb6-99cea3e87499", 00:44:31.124 "assigned_rate_limits": { 00:44:31.124 "rw_ios_per_sec": 0, 00:44:31.124 "rw_mbytes_per_sec": 0, 00:44:31.124 "r_mbytes_per_sec": 0, 00:44:31.124 "w_mbytes_per_sec": 0 00:44:31.124 }, 00:44:31.124 "claimed": false, 00:44:31.124 "zoned": false, 00:44:31.124 "supported_io_types": { 00:44:31.124 "read": true, 00:44:31.124 "write": true, 00:44:31.124 "unmap": true, 00:44:31.124 "flush": false, 00:44:31.124 "reset": true, 00:44:31.124 "nvme_admin": false, 00:44:31.124 "nvme_io": false, 00:44:31.124 "nvme_io_md": false, 00:44:31.124 "write_zeroes": true, 00:44:31.124 "zcopy": false, 00:44:31.124 "get_zone_info": false, 00:44:31.124 "zone_management": false, 00:44:31.124 "zone_append": false, 00:44:31.124 "compare": false, 00:44:31.124 "compare_and_write": false, 00:44:31.124 "abort": false, 00:44:31.124 "seek_hole": true, 00:44:31.124 "seek_data": true, 00:44:31.124 "copy": false, 00:44:31.124 "nvme_iov_md": false 00:44:31.124 }, 00:44:31.124 "driver_specific": { 00:44:31.124 "lvol": { 00:44:31.124 "lvol_store_uuid": "e3f3d1cc-f8a7-48df-bea0-ca3f3a86a216", 00:44:31.124 "base_bdev": "nvme0n1", 00:44:31.124 "thin_provision": true, 00:44:31.124 "num_allocated_clusters": 0, 00:44:31.124 "snapshot": false, 00:44:31.124 "clone": false, 00:44:31.124 "esnap_clone": false 00:44:31.124 } 00:44:31.124 } 00:44:31.124 } 00:44:31.124 ]' 00:44:31.124 13:39:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:44:31.124 13:39:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:44:31.124 13:39:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:44:31.124 13:39:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:44:31.124 13:39:24 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:44:31.124 13:39:24 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:44:31.124 13:39:24 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:44:31.124 13:39:24 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8c9a06c2-3bae-4f15-9cb6-99cea3e87499 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:44:31.385 [2024-12-06 13:39:24.309146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:31.385 [2024-12-06 13:39:24.309204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:31.385 [2024-12-06 13:39:24.309228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:31.385 [2024-12-06 13:39:24.309240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:31.385 [2024-12-06 13:39:24.313418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:31.385 [2024-12-06 13:39:24.313459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:31.385 [2024-12-06 13:39:24.313475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.108 ms 00:44:31.385 [2024-12-06 13:39:24.313486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:31.385 [2024-12-06 13:39:24.313673] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:31.385 [2024-12-06 13:39:24.314820] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:31.385 [2024-12-06 13:39:24.314862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:31.385 [2024-12-06 13:39:24.314875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:31.385 [2024-12-06 13:39:24.314890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:44:31.385 [2024-12-06 13:39:24.314902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:31.385 [2024-12-06 13:39:24.315083] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 484dddcb-97b5-4cf1-8d97-550a0be11fc7 00:44:31.385 [2024-12-06 13:39:24.317685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:31.385 [2024-12-06 13:39:24.317722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:44:31.385 [2024-12-06 13:39:24.317736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:44:31.385 [2024-12-06 13:39:24.317750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:31.385 [2024-12-06 13:39:24.332409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:31.385 [2024-12-06 13:39:24.332449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:31.385 [2024-12-06 13:39:24.332468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.531 ms 00:44:31.385 [2024-12-06 13:39:24.332483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:31.385 [2024-12-06 13:39:24.332706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:31.385 [2024-12-06 13:39:24.332725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:31.385 [2024-12-06 13:39:24.332737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.104 ms 00:44:31.385 [2024-12-06 13:39:24.332757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:31.385 [2024-12-06 13:39:24.332815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:31.385 [2024-12-06 13:39:24.332831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:31.385 [2024-12-06 13:39:24.332842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:44:31.385 [2024-12-06 13:39:24.332859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:31.385 [2024-12-06 13:39:24.332910] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:31.385 [2024-12-06 13:39:24.339540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:31.385 [2024-12-06 13:39:24.339605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:31.385 [2024-12-06 13:39:24.339625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.635 ms 00:44:31.385 [2024-12-06 13:39:24.339636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:31.385 [2024-12-06 13:39:24.339741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:31.385 [2024-12-06 13:39:24.339773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:31.385 [2024-12-06 13:39:24.339789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:31.385 [2024-12-06 13:39:24.339800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:31.385 [2024-12-06 13:39:24.339872] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:44:31.385 [2024-12-06 13:39:24.340021] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:31.385 [2024-12-06 13:39:24.340043] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:31.385 [2024-12-06 13:39:24.340058] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:31.385 [2024-12-06 13:39:24.340076] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:31.385 [2024-12-06 13:39:24.340089] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:31.385 [2024-12-06 13:39:24.340104] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:31.385 [2024-12-06 13:39:24.340115] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:31.385 [2024-12-06 13:39:24.340130] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:31.385 [2024-12-06 13:39:24.340144] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:31.385 [2024-12-06 13:39:24.340158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:31.385 [2024-12-06 13:39:24.340169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:31.385 [2024-12-06 13:39:24.340183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:44:31.385 [2024-12-06 13:39:24.340193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:31.385 [2024-12-06 13:39:24.340311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:44:31.385 [2024-12-06 13:39:24.340322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:31.385 [2024-12-06 13:39:24.340336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:44:31.385 [2024-12-06 13:39:24.340345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:31.385 [2024-12-06 13:39:24.340518] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:31.385 [2024-12-06 13:39:24.340546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:31.385 [2024-12-06 13:39:24.340561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:31.385 [2024-12-06 13:39:24.340572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:31.385 [2024-12-06 13:39:24.340586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:31.385 [2024-12-06 13:39:24.340596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:31.385 [2024-12-06 13:39:24.340608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:31.386 [2024-12-06 13:39:24.340618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:31.386 [2024-12-06 13:39:24.340631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:31.386 [2024-12-06 13:39:24.340640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:31.386 [2024-12-06 13:39:24.340654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:31.386 [2024-12-06 13:39:24.340663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:31.386 [2024-12-06 13:39:24.340676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:31.386 [2024-12-06 13:39:24.340687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:31.386 [2024-12-06 13:39:24.340699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:44:31.386 [2024-12-06 13:39:24.340708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:31.386 [2024-12-06 13:39:24.340724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:31.386 [2024-12-06 13:39:24.340733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:44:31.386 [2024-12-06 13:39:24.340745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:31.386 [2024-12-06 13:39:24.340755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:31.386 [2024-12-06 13:39:24.340768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:31.386 [2024-12-06 13:39:24.340777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:31.386 [2024-12-06 13:39:24.340792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:31.386 [2024-12-06 13:39:24.340801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:31.386 [2024-12-06 13:39:24.340814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:31.386 [2024-12-06 13:39:24.340823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:31.386 [2024-12-06 13:39:24.340836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:31.386 [2024-12-06 13:39:24.340845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:31.386 [2024-12-06 13:39:24.340858] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:44:31.386 [2024-12-06 13:39:24.340867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:44:31.386 [2024-12-06 13:39:24.340880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:31.386 [2024-12-06 13:39:24.340889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:31.386 [2024-12-06 13:39:24.340905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:44:31.386 [2024-12-06 13:39:24.340915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:31.386 [2024-12-06 13:39:24.340933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:31.386 [2024-12-06 13:39:24.340943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:44:31.386 [2024-12-06 13:39:24.340957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:31.386 [2024-12-06 13:39:24.340967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:31.386 [2024-12-06 13:39:24.340979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:44:31.386 [2024-12-06 13:39:24.340989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:31.386 [2024-12-06 13:39:24.341002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:31.386 [2024-12-06 13:39:24.341011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:44:31.386 [2024-12-06 13:39:24.341023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:31.386 [2024-12-06 13:39:24.341033] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:31.386 [2024-12-06 13:39:24.341047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:31.386 [2024-12-06 13:39:24.341057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:31.386 [2024-12-06 13:39:24.341071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:31.386 [2024-12-06 13:39:24.341082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:31.386 [2024-12-06 13:39:24.341097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:31.386 [2024-12-06 13:39:24.341107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:31.386 [2024-12-06 13:39:24.341119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:31.386 [2024-12-06 13:39:24.341129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:31.386 [2024-12-06 13:39:24.341141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:31.386 [2024-12-06 13:39:24.341153] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:31.386 [2024-12-06 13:39:24.341170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:31.386 [2024-12-06 13:39:24.341185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:31.386 [2024-12-06 13:39:24.341198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:44:31.386 [2024-12-06 13:39:24.341209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:44:31.386 [2024-12-06 13:39:24.341223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:44:31.386 [2024-12-06 13:39:24.341234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:44:31.386 [2024-12-06 13:39:24.341248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:44:31.386 [2024-12-06 13:39:24.341259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:44:31.386 [2024-12-06 13:39:24.341274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:44:31.386 [2024-12-06 13:39:24.341285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:44:31.386 [2024-12-06 13:39:24.341302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:44:31.386 [2024-12-06 13:39:24.341313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:44:31.386 [2024-12-06 13:39:24.341326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:44:31.386 [2024-12-06 13:39:24.341337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:44:31.386 [2024-12-06 13:39:24.341350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:44:31.386 [2024-12-06 13:39:24.341361] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:31.386 [2024-12-06 13:39:24.341380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:31.386 [2024-12-06 13:39:24.341391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:31.386 [2024-12-06 13:39:24.341417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:31.386 [2024-12-06 13:39:24.341427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:31.386 [2024-12-06 13:39:24.341442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:31.386 [2024-12-06 13:39:24.341455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:31.386 [2024-12-06 13:39:24.341469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:31.386 [2024-12-06 13:39:24.341479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:44:31.386 [2024-12-06 13:39:24.341493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:31.386 [2024-12-06 13:39:24.341636] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:44:31.386 [2024-12-06 13:39:24.341657] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:44:34.675 [2024-12-06 13:39:27.209076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.675 [2024-12-06 13:39:27.209389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:44:34.675 [2024-12-06 13:39:27.209435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2867.420 ms 00:44:34.675 [2024-12-06 13:39:27.209452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.675 [2024-12-06 13:39:27.260325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.675 [2024-12-06 13:39:27.260584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:34.675 [2024-12-06 13:39:27.260612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.423 ms 00:44:34.675 [2024-12-06 13:39:27.260629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.675 [2024-12-06 13:39:27.260826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.675 [2024-12-06 13:39:27.260845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:34.675 [2024-12-06 13:39:27.260879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:44:34.675 [2024-12-06 13:39:27.260899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.675 [2024-12-06 13:39:27.329642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.675 [2024-12-06 13:39:27.329701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:34.675 [2024-12-06 13:39:27.329718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.685 ms 00:44:34.675 [2024-12-06 13:39:27.329735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.675 [2024-12-06 13:39:27.329922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.675 [2024-12-06 13:39:27.329942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:34.675 [2024-12-06 13:39:27.329954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:34.675 [2024-12-06 13:39:27.329970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.675 [2024-12-06 13:39:27.330835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.675 [2024-12-06 13:39:27.330869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:34.675 [2024-12-06 13:39:27.330882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:44:34.675 [2024-12-06 13:39:27.330896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.675 [2024-12-06 13:39:27.331043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.675 [2024-12-06 13:39:27.331058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:34.675 [2024-12-06 13:39:27.331087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:44:34.675 [2024-12-06 13:39:27.331106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.675 [2024-12-06 13:39:27.358873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.675 [2024-12-06 13:39:27.358926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:44:34.675 [2024-12-06 13:39:27.358944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.722 ms 00:44:34.675 [2024-12-06 13:39:27.358959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.675 [2024-12-06 13:39:27.374030] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:34.675 [2024-12-06 13:39:27.401859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.675 [2024-12-06 13:39:27.401930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:34.675 [2024-12-06 13:39:27.401952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.682 ms 00:44:34.675 [2024-12-06 13:39:27.401965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.675 [2024-12-06 13:39:27.490569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.675 [2024-12-06 13:39:27.490654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:44:34.675 [2024-12-06 13:39:27.490676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.390 ms 00:44:34.675 [2024-12-06 13:39:27.490688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.675 [2024-12-06 13:39:27.490996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.675 [2024-12-06 13:39:27.491011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:34.675 [2024-12-06 13:39:27.491038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:44:34.675 [2024-12-06 13:39:27.491049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.675 [2024-12-06 13:39:27.527694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.675 [2024-12-06 13:39:27.527734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:44:34.676 [2024-12-06 13:39:27.527753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.583 ms 00:44:34.676 [2024-12-06 13:39:27.527764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.676 [2024-12-06 13:39:27.564377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.676 [2024-12-06 13:39:27.564436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:44:34.676 [2024-12-06 13:39:27.564458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.481 ms 00:44:34.676 [2024-12-06 13:39:27.564468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.676 [2024-12-06 13:39:27.565300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.676 [2024-12-06 13:39:27.565323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:34.676 [2024-12-06 13:39:27.565339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:44:34.676 [2024-12-06 13:39:27.565350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.676 [2024-12-06 13:39:27.673699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.676 [2024-12-06 13:39:27.673777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:44:34.676 [2024-12-06 13:39:27.673803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.256 ms 00:44:34.676 [2024-12-06 13:39:27.673814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:44:34.676 [2024-12-06 13:39:27.714438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.676 [2024-12-06 13:39:27.714493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:44:34.676 [2024-12-06 13:39:27.714515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.450 ms 00:44:34.676 [2024-12-06 13:39:27.714528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.676 [2024-12-06 13:39:27.753697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.676 [2024-12-06 13:39:27.753745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:44:34.676 [2024-12-06 13:39:27.753764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.029 ms 00:44:34.676 [2024-12-06 13:39:27.753775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.936 [2024-12-06 13:39:27.794713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.936 [2024-12-06 13:39:27.794775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:34.936 [2024-12-06 13:39:27.794795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.792 ms 00:44:34.936 [2024-12-06 13:39:27.794806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.936 [2024-12-06 13:39:27.794936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.936 [2024-12-06 13:39:27.794955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:34.936 [2024-12-06 13:39:27.794975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:34.936 [2024-12-06 13:39:27.794985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.936 [2024-12-06 13:39:27.795114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:34.936 [2024-12-06 13:39:27.795126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:34.936 [2024-12-06 13:39:27.795141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:44:34.936 [2024-12-06 13:39:27.795152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:34.936 [2024-12-06 13:39:27.796698] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:34.936 [2024-12-06 13:39:27.801539] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3487.147 ms, result 0 00:44:34.936 [2024-12-06 13:39:27.802628] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:34.936 { 00:44:34.936 "name": "ftl0", 00:44:34.936 "uuid": "484dddcb-97b5-4cf1-8d97-550a0be11fc7" 00:44:34.936 } 00:44:34.936 13:39:27 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:44:34.936 13:39:27 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:44:34.936 13:39:27 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:44:34.936 13:39:27 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:44:34.936 13:39:27 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:44:34.936 13:39:27 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:44:34.936 13:39:27 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:35.196 13:39:28 
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:44:35.457 [ 00:44:35.457 { 00:44:35.457 "name": "ftl0", 00:44:35.457 "aliases": [ 00:44:35.457 "484dddcb-97b5-4cf1-8d97-550a0be11fc7" 00:44:35.457 ], 00:44:35.457 "product_name": "FTL disk", 00:44:35.457 "block_size": 4096, 00:44:35.457 "num_blocks": 23592960, 00:44:35.457 "uuid": "484dddcb-97b5-4cf1-8d97-550a0be11fc7", 00:44:35.457 "assigned_rate_limits": { 00:44:35.457 "rw_ios_per_sec": 0, 00:44:35.457 "rw_mbytes_per_sec": 0, 00:44:35.457 "r_mbytes_per_sec": 0, 00:44:35.457 "w_mbytes_per_sec": 0 00:44:35.457 }, 00:44:35.457 "claimed": false, 00:44:35.457 "zoned": false, 00:44:35.457 "supported_io_types": { 00:44:35.457 "read": true, 00:44:35.457 "write": true, 00:44:35.457 "unmap": true, 00:44:35.457 "flush": true, 00:44:35.457 "reset": false, 00:44:35.457 "nvme_admin": false, 00:44:35.457 "nvme_io": false, 00:44:35.457 "nvme_io_md": false, 00:44:35.457 "write_zeroes": true, 00:44:35.457 "zcopy": false, 00:44:35.457 "get_zone_info": false, 00:44:35.457 "zone_management": false, 00:44:35.457 "zone_append": false, 00:44:35.457 "compare": false, 00:44:35.457 "compare_and_write": false, 00:44:35.457 "abort": false, 00:44:35.457 "seek_hole": false, 00:44:35.457 "seek_data": false, 00:44:35.457 "copy": false, 00:44:35.457 "nvme_iov_md": false 00:44:35.457 }, 00:44:35.457 "driver_specific": { 00:44:35.457 "ftl": { 00:44:35.457 "base_bdev": "8c9a06c2-3bae-4f15-9cb6-99cea3e87499", 00:44:35.457 "cache": "nvc0n1p0" 00:44:35.457 } 00:44:35.457 } 00:44:35.457 } 00:44:35.457 ] 00:44:35.457 13:39:28 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:44:35.457 13:39:28 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:44:35.457 13:39:28 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:44:35.716 13:39:28 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:44:35.717 13:39:28 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:44:35.976 13:39:28 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:44:35.976 { 00:44:35.976 "name": "ftl0", 00:44:35.976 "aliases": [ 00:44:35.976 "484dddcb-97b5-4cf1-8d97-550a0be11fc7" 00:44:35.976 ], 00:44:35.976 "product_name": "FTL disk", 00:44:35.976 "block_size": 4096, 00:44:35.976 "num_blocks": 23592960, 00:44:35.976 "uuid": "484dddcb-97b5-4cf1-8d97-550a0be11fc7", 00:44:35.976 "assigned_rate_limits": { 00:44:35.976 "rw_ios_per_sec": 0, 00:44:35.976 "rw_mbytes_per_sec": 0, 00:44:35.976 "r_mbytes_per_sec": 0, 00:44:35.976 "w_mbytes_per_sec": 0 00:44:35.976 }, 00:44:35.976 "claimed": false, 00:44:35.976 "zoned": false, 00:44:35.976 "supported_io_types": { 00:44:35.976 "read": true, 00:44:35.976 "write": true, 00:44:35.976 "unmap": true, 00:44:35.976 "flush": true, 00:44:35.976 "reset": false, 00:44:35.976 "nvme_admin": false, 00:44:35.976 "nvme_io": false, 00:44:35.976 "nvme_io_md": false, 00:44:35.976 "write_zeroes": true, 00:44:35.976 "zcopy": false, 00:44:35.976 "get_zone_info": false, 00:44:35.976 "zone_management": false, 00:44:35.976 "zone_append": false, 00:44:35.976 "compare": false, 00:44:35.976 "compare_and_write": false, 00:44:35.976 "abort": false, 00:44:35.976 "seek_hole": false, 00:44:35.976 "seek_data": false, 00:44:35.976 "copy": false, 00:44:35.976 "nvme_iov_md": false 00:44:35.976 }, 00:44:35.976 "driver_specific": { 00:44:35.976 "ftl": { 00:44:35.976 "base_bdev": 
"8c9a06c2-3bae-4f15-9cb6-99cea3e87499", 00:44:35.976 "cache": "nvc0n1p0" 00:44:35.976 } 00:44:35.976 } 00:44:35.976 } 00:44:35.976 ]' 00:44:35.976 13:39:28 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:44:35.976 13:39:28 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:44:35.976 13:39:28 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:44:36.237 [2024-12-06 13:39:29.099876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.237 [2024-12-06 13:39:29.099944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:36.237 [2024-12-06 13:39:29.099969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:36.237 [2024-12-06 13:39:29.099988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.237 [2024-12-06 13:39:29.100048] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:44:36.237 [2024-12-06 13:39:29.104910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.237 [2024-12-06 13:39:29.104945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:36.237 [2024-12-06 13:39:29.104970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.834 ms 00:44:36.237 [2024-12-06 13:39:29.104982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.237 [2024-12-06 13:39:29.105955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.237 [2024-12-06 13:39:29.105980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:36.237 [2024-12-06 13:39:29.105996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:44:36.237 [2024-12-06 13:39:29.106007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.237 [2024-12-06 13:39:29.108941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.237 [2024-12-06 13:39:29.108969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:36.237 [2024-12-06 13:39:29.108985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.887 ms 00:44:36.237 [2024-12-06 13:39:29.108995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.237 [2024-12-06 13:39:29.114910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.237 [2024-12-06 13:39:29.115066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:36.237 [2024-12-06 13:39:29.115093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.847 ms 00:44:36.237 [2024-12-06 13:39:29.115104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.237 [2024-12-06 13:39:29.154845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.237 [2024-12-06 13:39:29.154886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:36.237 [2024-12-06 13:39:29.154909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.573 ms 00:44:36.237 [2024-12-06 13:39:29.154920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.237 [2024-12-06 13:39:29.178275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.237 [2024-12-06 13:39:29.178458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:36.237 [2024-12-06 13:39:29.178489] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.226 ms 00:44:36.237 [2024-12-06 13:39:29.178506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.237 [2024-12-06 13:39:29.178875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.237 [2024-12-06 13:39:29.178891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:36.237 [2024-12-06 13:39:29.178907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:44:36.237 [2024-12-06 13:39:29.178918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.237 [2024-12-06 13:39:29.216413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.237 [2024-12-06 13:39:29.216454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:36.237 [2024-12-06 13:39:29.216474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.440 ms 00:44:36.237 [2024-12-06 13:39:29.216485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.237 [2024-12-06 13:39:29.253498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.237 [2024-12-06 13:39:29.253545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:36.237 [2024-12-06 13:39:29.253584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.899 ms 00:44:36.237 [2024-12-06 13:39:29.253595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.237 [2024-12-06 13:39:29.291429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.237 [2024-12-06 13:39:29.291475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:36.237 [2024-12-06 13:39:29.291510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.718 ms 00:44:36.237 [2024-12-06 13:39:29.291521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.237 [2024-12-06 13:39:29.328176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.237 [2024-12-06 13:39:29.328326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:36.237 [2024-12-06 13:39:29.328353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.438 ms 00:44:36.237 [2024-12-06 13:39:29.328363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.237 [2024-12-06 13:39:29.328500] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:36.237 [2024-12-06 13:39:29.328521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:36.237 [2024-12-06 13:39:29.328538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:36.238 [2024-12-06 13:39:29.328570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:36.238 [2024-12-06 13:39:29.328587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:36.238 [2024-12-06 13:39:29.328600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:36.238 [2024-12-06 13:39:29.328619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:36.238 [2024-12-06 13:39:29.328630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:36.238 
[2024-12-06 13:39:29.328645 .. 13:39:29.329990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8 .. Band 100: 0 / 261120 wr_cnt: 0 state: free [the remaining 93 per-band lines report values identical to Bands 1-7 above and are condensed here] 00:44:36.239 [2024-12-06 13:39:29.330010] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:36.239 [2024-12-06 13:39:29.330029] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 484dddcb-97b5-4cf1-8d97-550a0be11fc7 00:44:36.239 [2024-12-06 13:39:29.330042] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:36.239 [2024-12-06 13:39:29.330056] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:36.239 [2024-12-06 13:39:29.330067] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:36.239 [2024-12-06 13:39:29.330087] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:36.239 [2024-12-06 13:39:29.330097] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:36.239 [2024-12-06 13:39:29.330113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*:
[FTL][ftl0] crit: 0 00:44:36.239 [2024-12-06 13:39:29.330124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:36.239 [2024-12-06 13:39:29.330136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:36.239 [2024-12-06 13:39:29.330146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:36.239 [2024-12-06 13:39:29.330159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.239 [2024-12-06 13:39:29.330170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:36.239 [2024-12-06 13:39:29.330184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:44:36.239 [2024-12-06 13:39:29.330194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.498 [2024-12-06 13:39:29.352532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.498 [2024-12-06 13:39:29.352571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:36.498 [2024-12-06 13:39:29.352592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.266 ms 00:44:36.498 [2024-12-06 13:39:29.352604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.498 [2024-12-06 13:39:29.353332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:36.498 [2024-12-06 13:39:29.353348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:36.498 [2024-12-06 13:39:29.353362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:44:36.499 [2024-12-06 13:39:29.353372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.499 [2024-12-06 13:39:29.429555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:36.499 [2024-12-06 13:39:29.429598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:36.499 [2024-12-06 13:39:29.429615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:36.499 [2024-12-06 13:39:29.429625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.499 [2024-12-06 13:39:29.429795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:36.499 [2024-12-06 13:39:29.429808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:36.499 [2024-12-06 13:39:29.429822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:36.499 [2024-12-06 13:39:29.429832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.499 [2024-12-06 13:39:29.429933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:36.499 [2024-12-06 13:39:29.429946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:36.499 [2024-12-06 13:39:29.429967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:36.499 [2024-12-06 13:39:29.429977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.499 [2024-12-06 13:39:29.430028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:36.499 [2024-12-06 13:39:29.430039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:36.499 [2024-12-06 13:39:29.430052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:36.499 [2024-12-06 13:39:29.430062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.499 [2024-12-06 
13:39:29.578843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:36.499 [2024-12-06 13:39:29.581092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:36.499 [2024-12-06 13:39:29.581137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:36.499 [2024-12-06 13:39:29.581150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.758 [2024-12-06 13:39:29.695535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:36.758 [2024-12-06 13:39:29.695627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:36.758 [2024-12-06 13:39:29.695665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:36.758 [2024-12-06 13:39:29.695679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.758 [2024-12-06 13:39:29.695908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:36.758 [2024-12-06 13:39:29.695923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:36.758 [2024-12-06 13:39:29.695945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:36.758 [2024-12-06 13:39:29.695962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.758 [2024-12-06 13:39:29.696081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:36.758 [2024-12-06 13:39:29.696094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:36.758 [2024-12-06 13:39:29.696109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:36.758 [2024-12-06 13:39:29.696120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.758 [2024-12-06 13:39:29.696312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:36.758 [2024-12-06 13:39:29.696327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:36.758 [2024-12-06 13:39:29.696353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:36.758 [2024-12-06 13:39:29.696367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.758 [2024-12-06 13:39:29.696452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:36.758 [2024-12-06 13:39:29.696466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:36.758 [2024-12-06 13:39:29.696480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:36.758 [2024-12-06 13:39:29.696490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.758 [2024-12-06 13:39:29.696573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:36.758 [2024-12-06 13:39:29.696585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:36.758 [2024-12-06 13:39:29.696603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:36.758 [2024-12-06 13:39:29.696613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.758 [2024-12-06 13:39:29.696696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:36.758 [2024-12-06 13:39:29.696720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:36.758 [2024-12-06 13:39:29.696735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:36.758 [2024-12-06 13:39:29.696746] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:36.758 [2024-12-06 13:39:29.697094] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 597.167 ms, result 0 00:44:36.758 true 00:44:36.758 13:39:29 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79122 00:44:36.758 13:39:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79122 ']' 00:44:36.758 13:39:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79122 00:44:36.758 13:39:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:44:36.758 13:39:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:36.758 13:39:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79122 00:44:36.758 13:39:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:36.758 killing process with pid 79122 00:44:36.758 13:39:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:36.758 13:39:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79122' 00:44:36.758 13:39:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79122 00:44:36.758 13:39:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79122 00:44:43.319 13:39:35 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:44:43.319 65536+0 records in 00:44:43.319 65536+0 records out 00:44:43.319 268435456 bytes (268 MB, 256 MiB) copied, 1.13735 s, 236 MB/s 00:44:43.319 13:39:36 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:43.319 [2024-12-06 13:39:36.375211] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
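A quick recap, since this stretch is the heart of the trim test setup: waitforbdev confirmed ftl0 (bdev_wait_for_examine, then a bdev_get_bdevs poll with a 2000 ms timeout), the bdev subsystem config was captured with save_subsystem_config, num_blocks was extracted with jq, the device was unloaded cleanly with bdev_ftl_unload (the 'Persist ...' and 'Set FTL clean state' steps above), the app process (pid 79122) was killed, and a 256 MiB random pattern was staged with dd and is now being written into ftl0 by spdk_dd. Stripped to the bare commands, all taken from this trace (paths shortened from the job's /home/vagrant/spdk_repo/spdk prefix; the dd destination and the ftl.json redirection are inferred from the --if/--json arguments spdk_dd receives, as the redirections themselves are not echoed by the trace):

    scripts/rpc.py bdev_wait_for_examine                    # block until bdev examination finishes
    scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000           # wait for ftl0, 2000 ms timeout
    { echo '{"subsystems": ['
      scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'; } > test/ftl/config/ftl.json               # capture bdev config for spdk_dd
    nb=$(scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks')   # 23592960 here
    scripts/rpc.py bdev_ftl_unload -b ftl0                  # persist metadata, set clean state
    dd if=/dev/urandom of=test/ftl/random_pattern bs=4K count=65536      # 256 MiB of random data
    build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 \
        --json=test/ftl/config/ftl.json                     # write the pattern through the FTL bdev

The spdk_dd run prints its own SPDK/DPDK banner: it begins with the 'Starting SPDK v25.01-pre' line just above and continues with the EAL parameters below.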
00:44:43.319 [2024-12-06 13:39:36.375357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79340 ] 00:44:43.578 [2024-12-06 13:39:36.549132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:43.837 [2024-12-06 13:39:36.695595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:44.096 [2024-12-06 13:39:37.141020] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:44.096 [2024-12-06 13:39:37.141120] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:44.357 [2024-12-06 13:39:37.309966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.357 [2024-12-06 13:39:37.310031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:44.357 [2024-12-06 13:39:37.310048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:44.357 [2024-12-06 13:39:37.310060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.357 [2024-12-06 13:39:37.313763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.357 [2024-12-06 13:39:37.313802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:44.357 [2024-12-06 13:39:37.313816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.682 ms 00:44:44.357 [2024-12-06 13:39:37.313827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.357 [2024-12-06 13:39:37.313932] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:44.357 [2024-12-06 13:39:37.315012] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:44.357 [2024-12-06 13:39:37.315047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.357 [2024-12-06 13:39:37.315059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:44.357 [2024-12-06 13:39:37.315071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:44:44.357 [2024-12-06 13:39:37.315082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.357 [2024-12-06 13:39:37.317708] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:44.357 [2024-12-06 13:39:37.338647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.357 [2024-12-06 13:39:37.338686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:44.357 [2024-12-06 13:39:37.338718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.940 ms 00:44:44.357 [2024-12-06 13:39:37.338730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.357 [2024-12-06 13:39:37.338840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.357 [2024-12-06 13:39:37.338855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:44.357 [2024-12-06 13:39:37.338867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:44:44.357 [2024-12-06 13:39:37.338878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.357 [2024-12-06 13:39:37.351492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:44:44.357 [2024-12-06 13:39:37.351525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:44.357 [2024-12-06 13:39:37.351539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.567 ms 00:44:44.357 [2024-12-06 13:39:37.351555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.357 [2024-12-06 13:39:37.351689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.357 [2024-12-06 13:39:37.351705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:44.357 [2024-12-06 13:39:37.351717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:44:44.357 [2024-12-06 13:39:37.351728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.357 [2024-12-06 13:39:37.351764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.357 [2024-12-06 13:39:37.351776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:44.357 [2024-12-06 13:39:37.351788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:44:44.357 [2024-12-06 13:39:37.351798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.357 [2024-12-06 13:39:37.351827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:44.357 [2024-12-06 13:39:37.357944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.357 [2024-12-06 13:39:37.357977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:44.357 [2024-12-06 13:39:37.357990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.125 ms 00:44:44.357 [2024-12-06 13:39:37.358002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.357 [2024-12-06 13:39:37.358062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.357 [2024-12-06 13:39:37.358075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:44.357 [2024-12-06 13:39:37.358087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:44.357 [2024-12-06 13:39:37.358098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.357 [2024-12-06 13:39:37.358125] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:44.357 [2024-12-06 13:39:37.358155] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:44.357 [2024-12-06 13:39:37.358193] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:44.357 [2024-12-06 13:39:37.358212] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:44.357 [2024-12-06 13:39:37.358309] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:44.357 [2024-12-06 13:39:37.358323] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:44.357 [2024-12-06 13:39:37.358338] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:44.357 [2024-12-06 13:39:37.358356] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:44.357 [2024-12-06 13:39:37.358369] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:44.357 [2024-12-06 13:39:37.358381] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:44.357 [2024-12-06 13:39:37.358392] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:44.357 [2024-12-06 13:39:37.358414] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:44.357 [2024-12-06 13:39:37.358426] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:44.357 [2024-12-06 13:39:37.358437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.357 [2024-12-06 13:39:37.358448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:44.357 [2024-12-06 13:39:37.358459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:44:44.357 [2024-12-06 13:39:37.358469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.357 [2024-12-06 13:39:37.358550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.357 [2024-12-06 13:39:37.358571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:44.357 [2024-12-06 13:39:37.358583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:44:44.357 [2024-12-06 13:39:37.358594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.357 [2024-12-06 13:39:37.358687] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:44.357 [2024-12-06 13:39:37.358701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:44.357 [2024-12-06 13:39:37.358713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:44.357 [2024-12-06 13:39:37.358723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:44.357 [2024-12-06 13:39:37.358734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:44.357 [2024-12-06 13:39:37.358744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:44.357 [2024-12-06 13:39:37.358754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:44.357 [2024-12-06 13:39:37.358763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:44.358 [2024-12-06 13:39:37.358773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:44.358 [2024-12-06 13:39:37.358783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:44.358 [2024-12-06 13:39:37.358795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:44.358 [2024-12-06 13:39:37.358818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:44.358 [2024-12-06 13:39:37.358829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:44.358 [2024-12-06 13:39:37.358838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:44.358 [2024-12-06 13:39:37.358848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:44:44.358 [2024-12-06 13:39:37.358858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:44.358 [2024-12-06 13:39:37.358867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:44.358 [2024-12-06 13:39:37.358877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:44:44.358 [2024-12-06 13:39:37.358887] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:44.358 [2024-12-06 13:39:37.358897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:44.358 [2024-12-06 13:39:37.358907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:44.358 [2024-12-06 13:39:37.358916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:44.358 [2024-12-06 13:39:37.358926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:44.358 [2024-12-06 13:39:37.358936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:44.358 [2024-12-06 13:39:37.358945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:44.358 [2024-12-06 13:39:37.358955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:44.358 [2024-12-06 13:39:37.358964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:44.358 [2024-12-06 13:39:37.358974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:44.358 [2024-12-06 13:39:37.358984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:44.358 [2024-12-06 13:39:37.358993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:44:44.358 [2024-12-06 13:39:37.359002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:44.358 [2024-12-06 13:39:37.359012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:44.358 [2024-12-06 13:39:37.359022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:44:44.358 [2024-12-06 13:39:37.359032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:44.358 [2024-12-06 13:39:37.359041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:44.358 [2024-12-06 13:39:37.359050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:44:44.358 [2024-12-06 13:39:37.359059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:44.358 [2024-12-06 13:39:37.359068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:44.358 [2024-12-06 13:39:37.359078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:44:44.358 [2024-12-06 13:39:37.359087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:44.358 [2024-12-06 13:39:37.359097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:44.358 [2024-12-06 13:39:37.359106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:44:44.358 [2024-12-06 13:39:37.359120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:44.358 [2024-12-06 13:39:37.359130] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:44.358 [2024-12-06 13:39:37.359141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:44.358 [2024-12-06 13:39:37.359156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:44.358 [2024-12-06 13:39:37.359166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:44.358 [2024-12-06 13:39:37.359177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:44.358 [2024-12-06 13:39:37.359187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:44.358 [2024-12-06 13:39:37.359197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:44.358 
[2024-12-06 13:39:37.359207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:44.358 [2024-12-06 13:39:37.359217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:44.358 [2024-12-06 13:39:37.359226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:44.358 [2024-12-06 13:39:37.359238] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:44.358 [2024-12-06 13:39:37.359251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:44.358 [2024-12-06 13:39:37.359262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:44.358 [2024-12-06 13:39:37.359272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:44:44.358 [2024-12-06 13:39:37.359283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:44:44.358 [2024-12-06 13:39:37.359293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:44:44.358 [2024-12-06 13:39:37.359305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:44:44.358 [2024-12-06 13:39:37.359315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:44:44.358 [2024-12-06 13:39:37.359326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:44:44.358 [2024-12-06 13:39:37.359336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:44:44.358 [2024-12-06 13:39:37.359347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:44:44.358 [2024-12-06 13:39:37.359358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:44:44.358 [2024-12-06 13:39:37.359368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:44:44.358 [2024-12-06 13:39:37.359379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:44:44.358 [2024-12-06 13:39:37.359389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:44:44.358 [2024-12-06 13:39:37.359411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:44:44.358 [2024-12-06 13:39:37.359422] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:44.358 [2024-12-06 13:39:37.359435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:44.358 [2024-12-06 13:39:37.359446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:44:44.358 [2024-12-06 13:39:37.359457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:44.358 [2024-12-06 13:39:37.359468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:44.358 [2024-12-06 13:39:37.359481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:44.358 [2024-12-06 13:39:37.359492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.358 [2024-12-06 13:39:37.359507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:44.358 [2024-12-06 13:39:37.359518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:44:44.358 [2024-12-06 13:39:37.359529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.358 [2024-12-06 13:39:37.409317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.358 [2024-12-06 13:39:37.409370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:44.358 [2024-12-06 13:39:37.409387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.712 ms 00:44:44.358 [2024-12-06 13:39:37.409414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.358 [2024-12-06 13:39:37.409619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.358 [2024-12-06 13:39:37.409634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:44.358 [2024-12-06 13:39:37.409647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:44:44.358 [2024-12-06 13:39:37.409657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.618 [2024-12-06 13:39:37.479261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.618 [2024-12-06 13:39:37.479322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:44.618 [2024-12-06 13:39:37.479339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.574 ms 00:44:44.618 [2024-12-06 13:39:37.479351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.618 [2024-12-06 13:39:37.479488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.618 [2024-12-06 13:39:37.479502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:44.618 [2024-12-06 13:39:37.479515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:44:44.618 [2024-12-06 13:39:37.479525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.618 [2024-12-06 13:39:37.480345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.618 [2024-12-06 13:39:37.480373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:44.618 [2024-12-06 13:39:37.480391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:44:44.618 [2024-12-06 13:39:37.480419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.618 [2024-12-06 13:39:37.480572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.618 [2024-12-06 13:39:37.480588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:44.618 [2024-12-06 13:39:37.480601] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:44:44.618 [2024-12-06 13:39:37.480612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.618 [2024-12-06 13:39:37.505759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.618 [2024-12-06 13:39:37.505807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:44.618 [2024-12-06 13:39:37.505822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.118 ms 00:44:44.618 [2024-12-06 13:39:37.505834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.618 [2024-12-06 13:39:37.527178] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:44:44.618 [2024-12-06 13:39:37.527221] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:44.618 [2024-12-06 13:39:37.527238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.618 [2024-12-06 13:39:37.527250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:44.618 [2024-12-06 13:39:37.527263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.234 ms 00:44:44.618 [2024-12-06 13:39:37.527274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.618 [2024-12-06 13:39:37.558953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.618 [2024-12-06 13:39:37.558996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:44.618 [2024-12-06 13:39:37.559012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.588 ms 00:44:44.618 [2024-12-06 13:39:37.559023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.618 [2024-12-06 13:39:37.577815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.618 [2024-12-06 13:39:37.577856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:44.618 [2024-12-06 13:39:37.577871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.700 ms 00:44:44.618 [2024-12-06 13:39:37.577882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.618 [2024-12-06 13:39:37.596328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.618 [2024-12-06 13:39:37.596366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:44.618 [2024-12-06 13:39:37.596380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.362 ms 00:44:44.618 [2024-12-06 13:39:37.596390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.619 [2024-12-06 13:39:37.597241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.619 [2024-12-06 13:39:37.597275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:44.619 [2024-12-06 13:39:37.597289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:44:44.619 [2024-12-06 13:39:37.597300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.619 [2024-12-06 13:39:37.697250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.619 [2024-12-06 13:39:37.697335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:44.619 [2024-12-06 13:39:37.697356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 99.916 ms 00:44:44.619 [2024-12-06 13:39:37.697370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.619 [2024-12-06 13:39:37.709329] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:44.879 [2024-12-06 13:39:37.736797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.879 [2024-12-06 13:39:37.736874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:44.879 [2024-12-06 13:39:37.736910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.278 ms 00:44:44.879 [2024-12-06 13:39:37.736931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.879 [2024-12-06 13:39:37.737113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.879 [2024-12-06 13:39:37.737129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:44.879 [2024-12-06 13:39:37.737141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:44:44.879 [2024-12-06 13:39:37.737153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.879 [2024-12-06 13:39:37.737229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.879 [2024-12-06 13:39:37.737242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:44.879 [2024-12-06 13:39:37.737254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:44:44.879 [2024-12-06 13:39:37.737271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.879 [2024-12-06 13:39:37.737316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.879 [2024-12-06 13:39:37.737332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:44.879 [2024-12-06 13:39:37.737343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:44:44.879 [2024-12-06 13:39:37.737353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.879 [2024-12-06 13:39:37.737433] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:44.879 [2024-12-06 13:39:37.737449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.879 [2024-12-06 13:39:37.737461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:44.879 [2024-12-06 13:39:37.737473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:44:44.879 [2024-12-06 13:39:37.737483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.879 [2024-12-06 13:39:37.777243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.879 [2024-12-06 13:39:37.777292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:44.879 [2024-12-06 13:39:37.777309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.731 ms 00:44:44.879 [2024-12-06 13:39:37.777321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:44.879 [2024-12-06 13:39:37.777457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:44.879 [2024-12-06 13:39:37.777472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:44.879 [2024-12-06 13:39:37.777484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:44:44.879 [2024-12-06 13:39:37.777496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
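Each management step in these traces is logged by mngt/ftl_mngt.c as a fixed quadruple (427: 'Action' or 'Rollback', 428: 'name', 430: 'duration', 431: 'status'), which makes startup cost easy to profile straight from the console output. The layout dump above is also self-consistent: 23592960 L2P entries at the reported 4-byte address size come to exactly the 90.00 MiB listed for 'Region l2p', and 23592960 blocks of 4096 bytes give 90 GiB of user-visible capacity out of the 103424.00 MiB base device. As an illustration only (not part of the harness, and assuming the raw one-entry-per-line console log has been saved as console.log), the steps can be ranked by duration with:

    awk '/428:trace_step/ { n = $0; sub(/.*name: /, "", n) }
         /430:trace_step/ { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d);
                            printf "%10.3f ms  %s\n", d, n }' console.log | sort -rn | head

Against the startup above, 'Restore P2L checkpoints' (99.916 ms) and 'Initialize NV cache' (69.574 ms) would come out on top.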
00:44:44.879 [2024-12-06 13:39:37.778897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:44:44.879 [2024-12-06 13:39:37.783821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 468.521 ms, result 0
00:44:44.879 [2024-12-06 13:39:37.784689] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:44:44.879 [2024-12-06 13:39:37.803416] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:44:45.817  [2024-12-06T13:39:39.853Z] Copying: 29/256 [MB] (29 MBps)
[2024-12-06T13:39:41.227Z] Copying: 58/256 [MB] (29 MBps)
[2024-12-06T13:39:42.163Z] Copying: 88/256 [MB] (30 MBps)
[2024-12-06T13:39:43.099Z] Copying: 118/256 [MB] (29 MBps)
[2024-12-06T13:39:44.036Z] Copying: 148/256 [MB] (29 MBps)
[2024-12-06T13:39:44.973Z] Copying: 177/256 [MB] (29 MBps)
[2024-12-06T13:39:45.909Z] Copying: 207/256 [MB] (29 MBps)
[2024-12-06T13:39:46.847Z] Copying: 236/256 [MB] (29 MBps)
[2024-12-06T13:39:46.847Z] Copying: 256/256 [MB] (average 29 MBps)
[2024-12-06 13:39:46.484702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:44:53.747 [2024-12-06 13:39:46.500608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:53.747 [2024-12-06 13:39:46.500653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:44:53.747 [2024-12-06 13:39:46.500671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:44:53.747 [2024-12-06 13:39:46.500706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:53.747 [2024-12-06 13:39:46.500730] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:44:53.747 [2024-12-06 13:39:46.505712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:53.747 [2024-12-06 13:39:46.505739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:44:53.747 [2024-12-06 13:39:46.505767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.964 ms
00:44:53.747 [2024-12-06 13:39:46.505777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:53.747 [2024-12-06 13:39:46.507727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:53.747 [2024-12-06 13:39:46.507764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:44:53.747 [2024-12-06 13:39:46.507777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.924 ms
00:44:53.747 [2024-12-06 13:39:46.507788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:53.747 [2024-12-06 13:39:46.514647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:53.747 [2024-12-06 13:39:46.514690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:44:53.747 [2024-12-06 13:39:46.514703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.839 ms
00:44:53.747 [2024-12-06 13:39:46.514729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:53.747 [2024-12-06 13:39:46.520374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:53.747 [2024-12-06 13:39:46.520414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:44:53.747 [2024-12-06 13:39:46.520426] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.607 ms 00:44:53.747 [2024-12-06 13:39:46.520437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.747 [2024-12-06 13:39:46.558972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.747 [2024-12-06 13:39:46.559015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:53.747 [2024-12-06 13:39:46.559031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.479 ms 00:44:53.747 [2024-12-06 13:39:46.559042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.747 [2024-12-06 13:39:46.581361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.747 [2024-12-06 13:39:46.581409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:53.747 [2024-12-06 13:39:46.581433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.255 ms 00:44:53.747 [2024-12-06 13:39:46.581445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.747 [2024-12-06 13:39:46.581596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.747 [2024-12-06 13:39:46.581610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:53.747 [2024-12-06 13:39:46.581623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:44:53.747 [2024-12-06 13:39:46.581646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.747 [2024-12-06 13:39:46.621731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.747 [2024-12-06 13:39:46.621772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:53.747 [2024-12-06 13:39:46.621787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.051 ms 00:44:53.747 [2024-12-06 13:39:46.621797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.747 [2024-12-06 13:39:46.658393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.747 [2024-12-06 13:39:46.658436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:53.747 [2024-12-06 13:39:46.658449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.540 ms 00:44:53.747 [2024-12-06 13:39:46.658459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.747 [2024-12-06 13:39:46.694433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.747 [2024-12-06 13:39:46.694466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:53.747 [2024-12-06 13:39:46.694480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.917 ms 00:44:53.747 [2024-12-06 13:39:46.694490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.747 [2024-12-06 13:39:46.729999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.747 [2024-12-06 13:39:46.730042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:53.747 [2024-12-06 13:39:46.730056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.425 ms 00:44:53.747 [2024-12-06 13:39:46.730067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.747 [2024-12-06 13:39:46.730122] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:53.747 [2024-12-06 13:39:46.730141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:53.747 [2024-12-06 13:39:46.730381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730427] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 
13:39:46.730704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:44:53.748 [2024-12-06 13:39:46.730971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.730992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:44:53.748 [2024-12-06 13:39:46.731260] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:53.748 [2024-12-06 13:39:46.731270] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 484dddcb-97b5-4cf1-8d97-550a0be11fc7 00:44:53.748 [2024-12-06 13:39:46.731282] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:53.748 [2024-12-06 13:39:46.731292] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:53.748 [2024-12-06 13:39:46.731302] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:53.748 [2024-12-06 13:39:46.731313] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:53.748 [2024-12-06 13:39:46.731324] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:53.748 [2024-12-06 13:39:46.731334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:53.748 [2024-12-06 13:39:46.731350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:53.748 [2024-12-06 13:39:46.731360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:53.748 [2024-12-06 13:39:46.731370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:53.748 [2024-12-06 13:39:46.731380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.748 [2024-12-06 13:39:46.731390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:53.748 [2024-12-06 13:39:46.731411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:44:53.748 [2024-12-06 13:39:46.731421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.749 [2024-12-06 13:39:46.752887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.749 [2024-12-06 13:39:46.752915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:53.749 [2024-12-06 13:39:46.752928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.444 ms 00:44:53.749 [2024-12-06 13:39:46.752940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.749 [2024-12-06 13:39:46.753639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:53.749 [2024-12-06 13:39:46.753659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:53.749 [2024-12-06 13:39:46.753673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:44:53.749 [2024-12-06 13:39:46.753684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.749 [2024-12-06 13:39:46.813255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.749 [2024-12-06 13:39:46.813288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:53.749 [2024-12-06 13:39:46.813317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.749 [2024-12-06 13:39:46.813334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.749 [2024-12-06 13:39:46.813432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.749 [2024-12-06 13:39:46.813445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:53.749 [2024-12-06 13:39:46.813463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.749 [2024-12-06 13:39:46.813473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
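The statistics dump above reports WAF: inf next to total writes: 960 and user writes: 0. WAF, the write amplification factor, is the ratio of media writes to user writes, and with zero user writes in this run the ratio is undefined, which the debug dump prints as inf. The same arithmetic in two lines; the zero guard is this sketch's own, not SPDK's:

    total_writes, user_writes = 960, 0       # values from the dump above
    waf = total_writes / user_writes if user_writes else float("inf")
    print(f"WAF: {waf}")                     # "WAF: inf", matching the log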
00:44:53.749 [2024-12-06 13:39:46.813525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.749 [2024-12-06 13:39:46.813539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:53.749 [2024-12-06 13:39:46.813550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.749 [2024-12-06 13:39:46.813561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:53.749 [2024-12-06 13:39:46.813587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:53.749 [2024-12-06 13:39:46.813598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:53.749 [2024-12-06 13:39:46.813609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:53.749 [2024-12-06 13:39:46.813619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.008 [2024-12-06 13:39:46.950955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.008 [2024-12-06 13:39:46.951031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:54.008 [2024-12-06 13:39:46.951049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.008 [2024-12-06 13:39:46.951062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.008 [2024-12-06 13:39:47.060039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.008 [2024-12-06 13:39:47.060097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:54.008 [2024-12-06 13:39:47.060130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.008 [2024-12-06 13:39:47.060143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.008 [2024-12-06 13:39:47.060268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.008 [2024-12-06 13:39:47.060283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:54.008 [2024-12-06 13:39:47.060295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.008 [2024-12-06 13:39:47.060307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.008 [2024-12-06 13:39:47.060343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.008 [2024-12-06 13:39:47.060361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:54.008 [2024-12-06 13:39:47.060374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.008 [2024-12-06 13:39:47.060385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.008 [2024-12-06 13:39:47.060536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.008 [2024-12-06 13:39:47.060556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:54.008 [2024-12-06 13:39:47.060569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.008 [2024-12-06 13:39:47.060580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.008 [2024-12-06 13:39:47.060624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.008 [2024-12-06 13:39:47.060638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:54.008 [2024-12-06 13:39:47.060655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.008 [2024-12-06 
13:39:47.060667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:54.008 [2024-12-06 13:39:47.060719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:54.008 [2024-12-06 13:39:47.060731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:44:54.008 [2024-12-06 13:39:47.060744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:54.008 [2024-12-06 13:39:47.060756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:54.008 [2024-12-06 13:39:47.060812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:44:54.008 [2024-12-06 13:39:47.060830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:44:54.008 [2024-12-06 13:39:47.060843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:44:54.008 [2024-12-06 13:39:47.060855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:54.008 [2024-12-06 13:39:47.061032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 560.404 ms, result 0
00:44:55.386
00:44:55.386
00:44:55.386 13:39:48 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79461
00:44:55.386 13:39:48 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:44:55.386 13:39:48 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79461
00:44:55.386 13:39:48 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79461 ']'
00:44:55.386 13:39:48 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:44:55.386 13:39:48 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:44:55.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:44:55.386 13:39:48 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:44:55.386 13:39:48 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:44:55.386 13:39:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:44:55.645 [2024-12-06 13:39:48.543033] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization...
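The waitforlisten helper traced above blocks until the freshly launched spdk_tgt opens its RPC socket, retrying with the rpc_addr=/var/tmp/spdk.sock and max_retries=100 values the xtrace shows. A minimal Python sketch of the same wait-for-listen pattern, reusing the binary and socket paths from the log; it illustrates the idea and is not the autotest helper itself:

    import socket
    import subprocess
    import time

    # Paths taken from the log above; the loop mirrors the
    # rpc_addr=/var/tmp/spdk.sock / max_retries=100 behaviour.
    SPDK_TGT = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt"
    RPC_SOCK = "/var/tmp/spdk.sock"

    proc = subprocess.Popen([SPDK_TGT, "-L", "ftl_init"])
    for _ in range(100):                     # max_retries
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(RPC_SOCK)          # succeeds once the target listens
            break
        except OSError:
            if proc.poll() is not None:      # target died before listening
                raise RuntimeError("spdk_tgt exited before listening")
            time.sleep(0.5)
    else:
        raise TimeoutError(f"no listener on {RPC_SOCK}")

The RPC invocations that follow in the log (rpc.py load_config, bdev_ftl_unmap) can only proceed once this connect succeeds.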
00:44:55.645 [2024-12-06 13:39:48.543183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79461 ] 00:44:55.645 [2024-12-06 13:39:48.717904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:55.905 [2024-12-06 13:39:48.860748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:56.892 13:39:49 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:56.892 13:39:49 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:44:56.892 13:39:49 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:44:57.151 [2024-12-06 13:39:50.179022] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:57.151 [2024-12-06 13:39:50.179097] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:57.411 [2024-12-06 13:39:50.366847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.411 [2024-12-06 13:39:50.366908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:57.411 [2024-12-06 13:39:50.366933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:44:57.411 [2024-12-06 13:39:50.366945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.411 [2024-12-06 13:39:50.371438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.411 [2024-12-06 13:39:50.371474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:57.411 [2024-12-06 13:39:50.371489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.470 ms 00:44:57.411 [2024-12-06 13:39:50.371501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.411 [2024-12-06 13:39:50.371626] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:57.411 [2024-12-06 13:39:50.372691] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:57.411 [2024-12-06 13:39:50.372723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.411 [2024-12-06 13:39:50.372735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:57.411 [2024-12-06 13:39:50.372749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:44:57.411 [2024-12-06 13:39:50.372759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.411 [2024-12-06 13:39:50.375313] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:57.411 [2024-12-06 13:39:50.396197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.411 [2024-12-06 13:39:50.396240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:57.411 [2024-12-06 13:39:50.396256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.887 ms 00:44:57.411 [2024-12-06 13:39:50.396272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.411 [2024-12-06 13:39:50.396383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.411 [2024-12-06 13:39:50.396419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:57.411 [2024-12-06 13:39:50.396433] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:44:57.411 [2024-12-06 13:39:50.396449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.411 [2024-12-06 13:39:50.409084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.411 [2024-12-06 13:39:50.409127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:57.411 [2024-12-06 13:39:50.409142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.570 ms 00:44:57.411 [2024-12-06 13:39:50.409159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.411 [2024-12-06 13:39:50.409337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.411 [2024-12-06 13:39:50.409361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:57.411 [2024-12-06 13:39:50.409373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:44:57.411 [2024-12-06 13:39:50.409411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.411 [2024-12-06 13:39:50.409446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.411 [2024-12-06 13:39:50.409464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:57.411 [2024-12-06 13:39:50.409476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:57.411 [2024-12-06 13:39:50.409493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.411 [2024-12-06 13:39:50.409524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:44:57.411 [2024-12-06 13:39:50.415675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.411 [2024-12-06 13:39:50.415703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:57.411 [2024-12-06 13:39:50.415722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:44:57.411 [2024-12-06 13:39:50.415733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.411 [2024-12-06 13:39:50.415805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.411 [2024-12-06 13:39:50.415818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:57.411 [2024-12-06 13:39:50.415835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:57.411 [2024-12-06 13:39:50.415851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.411 [2024-12-06 13:39:50.415882] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:57.411 [2024-12-06 13:39:50.415915] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:57.411 [2024-12-06 13:39:50.415972] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:57.411 [2024-12-06 13:39:50.415995] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:57.411 [2024-12-06 13:39:50.416096] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:57.411 [2024-12-06 13:39:50.416111] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:57.411 [2024-12-06 13:39:50.416138] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:57.411 [2024-12-06 13:39:50.416152] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:57.411 [2024-12-06 13:39:50.416170] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:57.412 [2024-12-06 13:39:50.416183] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:44:57.412 [2024-12-06 13:39:50.416199] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:57.412 [2024-12-06 13:39:50.416210] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:57.412 [2024-12-06 13:39:50.416231] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:57.412 [2024-12-06 13:39:50.416243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.412 [2024-12-06 13:39:50.416259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:57.412 [2024-12-06 13:39:50.416270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:44:57.412 [2024-12-06 13:39:50.416286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.412 [2024-12-06 13:39:50.416371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.412 [2024-12-06 13:39:50.416389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:57.412 [2024-12-06 13:39:50.416413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:44:57.412 [2024-12-06 13:39:50.416429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.412 [2024-12-06 13:39:50.416526] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:57.412 [2024-12-06 13:39:50.416547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:57.412 [2024-12-06 13:39:50.416558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:57.412 [2024-12-06 13:39:50.416575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:57.412 [2024-12-06 13:39:50.416587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:57.412 [2024-12-06 13:39:50.416603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:57.412 [2024-12-06 13:39:50.416614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:44:57.412 [2024-12-06 13:39:50.416636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:57.412 [2024-12-06 13:39:50.416646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:44:57.412 [2024-12-06 13:39:50.416661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:57.412 [2024-12-06 13:39:50.416671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:57.412 [2024-12-06 13:39:50.416687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:44:57.412 [2024-12-06 13:39:50.416696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:57.412 [2024-12-06 13:39:50.416712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:57.412 [2024-12-06 13:39:50.416724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:44:57.412 [2024-12-06 13:39:50.416739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:57.412 
[2024-12-06 13:39:50.416749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:57.412 [2024-12-06 13:39:50.416765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:44:57.412 [2024-12-06 13:39:50.416788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:57.412 [2024-12-06 13:39:50.416804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:57.412 [2024-12-06 13:39:50.416814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:44:57.412 [2024-12-06 13:39:50.416829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:57.412 [2024-12-06 13:39:50.416839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:57.412 [2024-12-06 13:39:50.416860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:44:57.412 [2024-12-06 13:39:50.416870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:57.412 [2024-12-06 13:39:50.416886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:57.412 [2024-12-06 13:39:50.416896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:44:57.412 [2024-12-06 13:39:50.416911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:57.412 [2024-12-06 13:39:50.416921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:57.412 [2024-12-06 13:39:50.416938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:44:57.412 [2024-12-06 13:39:50.416948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:57.412 [2024-12-06 13:39:50.416975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:57.412 [2024-12-06 13:39:50.416985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:44:57.412 [2024-12-06 13:39:50.417000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:57.412 [2024-12-06 13:39:50.417010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:57.412 [2024-12-06 13:39:50.417025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:44:57.412 [2024-12-06 13:39:50.417035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:57.412 [2024-12-06 13:39:50.417050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:57.412 [2024-12-06 13:39:50.417060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:44:57.412 [2024-12-06 13:39:50.417079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:57.412 [2024-12-06 13:39:50.417089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:57.412 [2024-12-06 13:39:50.417104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:44:57.412 [2024-12-06 13:39:50.417117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:57.412 [2024-12-06 13:39:50.417131] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:57.412 [2024-12-06 13:39:50.417148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:57.412 [2024-12-06 13:39:50.417163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:57.412 [2024-12-06 13:39:50.417175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:57.412 [2024-12-06 13:39:50.417192] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:44:57.412 [2024-12-06 13:39:50.417202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:57.412 [2024-12-06 13:39:50.417217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:57.412 [2024-12-06 13:39:50.417227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:57.412 [2024-12-06 13:39:50.417243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:57.412 [2024-12-06 13:39:50.417253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:57.412 [2024-12-06 13:39:50.417270] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:57.412 [2024-12-06 13:39:50.417283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:57.412 [2024-12-06 13:39:50.417308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:44:57.412 [2024-12-06 13:39:50.417319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:44:57.412 [2024-12-06 13:39:50.417336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:44:57.412 [2024-12-06 13:39:50.417348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:44:57.412 [2024-12-06 13:39:50.417364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:44:57.412 [2024-12-06 13:39:50.417375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:44:57.412 [2024-12-06 13:39:50.417391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:44:57.412 [2024-12-06 13:39:50.417412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:44:57.412 [2024-12-06 13:39:50.417428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:44:57.412 [2024-12-06 13:39:50.417439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:44:57.412 [2024-12-06 13:39:50.417456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:44:57.412 [2024-12-06 13:39:50.417466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:44:57.412 [2024-12-06 13:39:50.417485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:44:57.412 [2024-12-06 13:39:50.417496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:44:57.412 [2024-12-06 13:39:50.417513] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:57.412 [2024-12-06 
13:39:50.417525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:57.412 [2024-12-06 13:39:50.417548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:57.412 [2024-12-06 13:39:50.417559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:57.412 [2024-12-06 13:39:50.417575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:57.412 [2024-12-06 13:39:50.417587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:57.412 [2024-12-06 13:39:50.417604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.412 [2024-12-06 13:39:50.417615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:57.412 [2024-12-06 13:39:50.417632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:44:57.412 [2024-12-06 13:39:50.417650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.412 [2024-12-06 13:39:50.470147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.412 [2024-12-06 13:39:50.470189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:57.412 [2024-12-06 13:39:50.470211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.417 ms 00:44:57.412 [2024-12-06 13:39:50.470229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.412 [2024-12-06 13:39:50.470417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.412 [2024-12-06 13:39:50.470432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:57.413 [2024-12-06 13:39:50.470450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:44:57.413 [2024-12-06 13:39:50.470461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.673 [2024-12-06 13:39:50.528641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.673 [2024-12-06 13:39:50.528679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:57.673 [2024-12-06 13:39:50.528699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.141 ms 00:44:57.673 [2024-12-06 13:39:50.528711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.673 [2024-12-06 13:39:50.528812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.673 [2024-12-06 13:39:50.528825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:57.673 [2024-12-06 13:39:50.528842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:44:57.673 [2024-12-06 13:39:50.528853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.673 [2024-12-06 13:39:50.529662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.673 [2024-12-06 13:39:50.529686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:57.673 [2024-12-06 13:39:50.529703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:44:57.673 [2024-12-06 13:39:50.529713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:44:57.673 [2024-12-06 13:39:50.529858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.673 [2024-12-06 13:39:50.529872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:57.673 [2024-12-06 13:39:50.529889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:44:57.673 [2024-12-06 13:39:50.529900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.673 [2024-12-06 13:39:50.558304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.673 [2024-12-06 13:39:50.558339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:57.673 [2024-12-06 13:39:50.558377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.369 ms 00:44:57.673 [2024-12-06 13:39:50.558389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.673 [2024-12-06 13:39:50.588390] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:44:57.673 [2024-12-06 13:39:50.588430] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:57.673 [2024-12-06 13:39:50.588453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.673 [2024-12-06 13:39:50.588466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:57.673 [2024-12-06 13:39:50.588484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.902 ms 00:44:57.673 [2024-12-06 13:39:50.588509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.673 [2024-12-06 13:39:50.619221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.673 [2024-12-06 13:39:50.619256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:57.673 [2024-12-06 13:39:50.619293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.618 ms 00:44:57.673 [2024-12-06 13:39:50.619304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.673 [2024-12-06 13:39:50.638108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.673 [2024-12-06 13:39:50.638140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:57.673 [2024-12-06 13:39:50.638176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.721 ms 00:44:57.673 [2024-12-06 13:39:50.638186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.673 [2024-12-06 13:39:50.656856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.673 [2024-12-06 13:39:50.656888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:57.673 [2024-12-06 13:39:50.656904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.588 ms 00:44:57.673 [2024-12-06 13:39:50.656914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.673 [2024-12-06 13:39:50.657704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.673 [2024-12-06 13:39:50.657725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:57.673 [2024-12-06 13:39:50.657740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:44:57.673 [2024-12-06 13:39:50.657751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.673 [2024-12-06 
13:39:50.757981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.673 [2024-12-06 13:39:50.758048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:57.673 [2024-12-06 13:39:50.758074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.190 ms 00:44:57.673 [2024-12-06 13:39:50.758087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.673 [2024-12-06 13:39:50.770105] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:44:57.933 [2024-12-06 13:39:50.797234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.933 [2024-12-06 13:39:50.797327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:57.933 [2024-12-06 13:39:50.797353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.012 ms 00:44:57.933 [2024-12-06 13:39:50.797371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.933 [2024-12-06 13:39:50.797553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.933 [2024-12-06 13:39:50.797576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:57.933 [2024-12-06 13:39:50.797589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:44:57.933 [2024-12-06 13:39:50.797606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.933 [2024-12-06 13:39:50.797681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.933 [2024-12-06 13:39:50.797700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:57.933 [2024-12-06 13:39:50.797712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:44:57.933 [2024-12-06 13:39:50.797735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.933 [2024-12-06 13:39:50.797763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.933 [2024-12-06 13:39:50.797781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:57.933 [2024-12-06 13:39:50.797793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:57.933 [2024-12-06 13:39:50.797809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.933 [2024-12-06 13:39:50.797860] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:57.933 [2024-12-06 13:39:50.797884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.933 [2024-12-06 13:39:50.797904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:57.933 [2024-12-06 13:39:50.797921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:44:57.933 [2024-12-06 13:39:50.797931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.933 [2024-12-06 13:39:50.836018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.933 [2024-12-06 13:39:50.836056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:57.933 [2024-12-06 13:39:50.836077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.043 ms 00:44:57.933 [2024-12-06 13:39:50.836089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.933 [2024-12-06 13:39:50.836212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.933 [2024-12-06 13:39:50.836226] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:57.933 [2024-12-06 13:39:50.836244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:44:57.934 [2024-12-06 13:39:50.836260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.934 [2024-12-06 13:39:50.837692] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:57.934 [2024-12-06 13:39:50.842187] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 470.433 ms, result 0 00:44:57.934 [2024-12-06 13:39:50.843512] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:57.934 Some configs were skipped because the RPC state that can call them passed over. 00:44:57.934 13:39:50 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:44:58.193 [2024-12-06 13:39:51.143791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:58.193 [2024-12-06 13:39:51.143869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:44:58.193 [2024-12-06 13:39:51.143888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.478 ms 00:44:58.193 [2024-12-06 13:39:51.143905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:58.193 [2024-12-06 13:39:51.143950] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.650 ms, result 0 00:44:58.193 true 00:44:58.193 13:39:51 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:44:58.454 [2024-12-06 13:39:51.419756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:58.454 [2024-12-06 13:39:51.419839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:44:58.454 [2024-12-06 13:39:51.419866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:44:58.454 [2024-12-06 13:39:51.419880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:58.454 [2024-12-06 13:39:51.419938] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.213 ms, result 0 00:44:58.454 true 00:44:58.454 13:39:51 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79461 00:44:58.454 13:39:51 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79461 ']' 00:44:58.454 13:39:51 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79461 00:44:58.454 13:39:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:44:58.454 13:39:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:58.454 13:39:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79461 00:44:58.454 killing process with pid 79461 00:44:58.454 13:39:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:58.454 13:39:51 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:58.454 13:39:51 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79461' 00:44:58.454 13:39:51 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79461 00:44:58.454 13:39:51 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79461 00:44:59.834 [2024-12-06 13:39:52.802004] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.834 [2024-12-06 13:39:52.802084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:59.834 [2024-12-06 13:39:52.802102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:59.834 [2024-12-06 13:39:52.802116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.834 [2024-12-06 13:39:52.802147] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:44:59.834 [2024-12-06 13:39:52.807247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.834 [2024-12-06 13:39:52.807282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:59.834 [2024-12-06 13:39:52.807301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.077 ms 00:44:59.834 [2024-12-06 13:39:52.807311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.834 [2024-12-06 13:39:52.807636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.834 [2024-12-06 13:39:52.807651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:59.834 [2024-12-06 13:39:52.807664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:44:59.834 [2024-12-06 13:39:52.807674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.834 [2024-12-06 13:39:52.810979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.834 [2024-12-06 13:39:52.811016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:59.834 [2024-12-06 13:39:52.811036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.280 ms 00:44:59.834 [2024-12-06 13:39:52.811047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.834 [2024-12-06 13:39:52.817147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.834 [2024-12-06 13:39:52.817184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:59.834 [2024-12-06 13:39:52.817203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.058 ms 00:44:59.834 [2024-12-06 13:39:52.817213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.834 [2024-12-06 13:39:52.832858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.834 [2024-12-06 13:39:52.832903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:59.834 [2024-12-06 13:39:52.832922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.584 ms 00:44:59.834 [2024-12-06 13:39:52.832933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.834 [2024-12-06 13:39:52.844413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.834 [2024-12-06 13:39:52.844462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:59.834 [2024-12-06 13:39:52.844480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.407 ms 00:44:59.834 [2024-12-06 13:39:52.844492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.834 [2024-12-06 13:39:52.844636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.834 [2024-12-06 13:39:52.844651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:59.834 [2024-12-06 13:39:52.844666] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:44:59.834 [2024-12-06 13:39:52.844677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.834 [2024-12-06 13:39:52.861115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.834 [2024-12-06 13:39:52.861150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:59.834 [2024-12-06 13:39:52.861171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.403 ms 00:44:59.834 [2024-12-06 13:39:52.861182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.834 [2024-12-06 13:39:52.876850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.834 [2024-12-06 13:39:52.876884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:59.834 [2024-12-06 13:39:52.876912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.605 ms 00:44:59.834 [2024-12-06 13:39:52.876922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.834 [2024-12-06 13:39:52.892274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.834 [2024-12-06 13:39:52.892309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:59.834 [2024-12-06 13:39:52.892330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.272 ms 00:44:59.834 [2024-12-06 13:39:52.892341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.834 [2024-12-06 13:39:52.907775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:59.834 [2024-12-06 13:39:52.907820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:59.834 [2024-12-06 13:39:52.907855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.314 ms 00:44:59.834 [2024-12-06 13:39:52.907866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:59.834 [2024-12-06 13:39:52.907925] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:59.834 [2024-12-06 13:39:52.907946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.907966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.907979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.907997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 
13:39:52.908105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:44:59.834 [2024-12-06 13:39:52.908515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:59.834 [2024-12-06 13:39:52.908678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.908992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:59.835 [2024-12-06 13:39:52.909467] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:59.835 [2024-12-06 13:39:52.909488] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 484dddcb-97b5-4cf1-8d97-550a0be11fc7 00:44:59.835 [2024-12-06 13:39:52.909505] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:59.835 [2024-12-06 13:39:52.909519] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:59.835 [2024-12-06 13:39:52.909529] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:59.835 [2024-12-06 13:39:52.909543] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:59.835 [2024-12-06 13:39:52.909553] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:59.835 [2024-12-06 13:39:52.909567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:59.835 [2024-12-06 13:39:52.909577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:59.835 [2024-12-06 13:39:52.909590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:59.835 [2024-12-06 13:39:52.909599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:59.835 [2024-12-06 13:39:52.909612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:44:59.835 [2024-12-06 13:39:52.909623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:59.835 [2024-12-06 13:39:52.909637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.693 ms 00:44:59.835 [2024-12-06 13:39:52.909648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.094 [2024-12-06 13:39:52.932281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:00.094 [2024-12-06 13:39:52.932331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:00.094 [2024-12-06 13:39:52.932353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.600 ms 00:45:00.094 [2024-12-06 13:39:52.932365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.094 [2024-12-06 13:39:52.933036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:00.094 [2024-12-06 13:39:52.933067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:00.094 [2024-12-06 13:39:52.933088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:45:00.094 [2024-12-06 13:39:52.933099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.094 [2024-12-06 13:39:53.010851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:00.095 [2024-12-06 13:39:53.010896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:00.095 [2024-12-06 13:39:53.010914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:00.095 [2024-12-06 13:39:53.010926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.095 [2024-12-06 13:39:53.011069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:00.095 [2024-12-06 13:39:53.011082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:00.095 [2024-12-06 13:39:53.011102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:00.095 [2024-12-06 13:39:53.011113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.095 [2024-12-06 13:39:53.011175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:00.095 [2024-12-06 13:39:53.011188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:00.095 [2024-12-06 13:39:53.011207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:00.095 [2024-12-06 13:39:53.011218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.095 [2024-12-06 13:39:53.011243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:00.095 [2024-12-06 13:39:53.011254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:00.095 [2024-12-06 13:39:53.011268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:00.095 [2024-12-06 13:39:53.011282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.095 [2024-12-06 13:39:53.156055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:00.095 [2024-12-06 13:39:53.156137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:00.095 [2024-12-06 13:39:53.156163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:00.095 [2024-12-06 13:39:53.156176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.353 [2024-12-06 
13:39:53.271768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:00.353 [2024-12-06 13:39:53.271832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:00.353 [2024-12-06 13:39:53.271855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:00.353 [2024-12-06 13:39:53.271874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.353 [2024-12-06 13:39:53.272018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:00.353 [2024-12-06 13:39:53.272032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:00.353 [2024-12-06 13:39:53.272057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:00.353 [2024-12-06 13:39:53.272068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.353 [2024-12-06 13:39:53.272108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:00.353 [2024-12-06 13:39:53.272120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:00.353 [2024-12-06 13:39:53.272137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:00.353 [2024-12-06 13:39:53.272149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.353 [2024-12-06 13:39:53.272294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:00.353 [2024-12-06 13:39:53.272308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:00.353 [2024-12-06 13:39:53.272325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:00.353 [2024-12-06 13:39:53.272336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.353 [2024-12-06 13:39:53.272387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:00.353 [2024-12-06 13:39:53.272418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:00.353 [2024-12-06 13:39:53.272436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:00.353 [2024-12-06 13:39:53.272447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.353 [2024-12-06 13:39:53.272509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:00.353 [2024-12-06 13:39:53.272521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:00.353 [2024-12-06 13:39:53.272543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:00.353 [2024-12-06 13:39:53.272554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.353 [2024-12-06 13:39:53.272616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:00.353 [2024-12-06 13:39:53.272629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:00.353 [2024-12-06 13:39:53.272646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:00.353 [2024-12-06 13:39:53.272657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:00.353 [2024-12-06 13:39:53.272845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 470.797 ms, result 0 00:45:01.730 13:39:54 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:45:01.730 13:39:54 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:01.730 [2024-12-06 13:39:54.566835] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:45:01.730 [2024-12-06 13:39:54.567022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79536 ] 00:45:01.730 [2024-12-06 13:39:54.747674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:01.988 [2024-12-06 13:39:54.893489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:02.246 [2024-12-06 13:39:55.339056] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:02.246 [2024-12-06 13:39:55.339140] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:02.505 [2024-12-06 13:39:55.508147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.505 [2024-12-06 13:39:55.508206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:02.505 [2024-12-06 13:39:55.508224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:02.505 [2024-12-06 13:39:55.508237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.505 [2024-12-06 13:39:55.511792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.505 [2024-12-06 13:39:55.511829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:02.505 [2024-12-06 13:39:55.511842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.533 ms 00:45:02.505 [2024-12-06 13:39:55.511853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.505 [2024-12-06 13:39:55.511959] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:02.505 [2024-12-06 13:39:55.512943] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:02.505 [2024-12-06 13:39:55.512979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.505 [2024-12-06 13:39:55.512990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:02.505 [2024-12-06 13:39:55.513002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:45:02.505 [2024-12-06 13:39:55.513014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.505 [2024-12-06 13:39:55.515752] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:02.505 [2024-12-06 13:39:55.536052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.505 [2024-12-06 13:39:55.536092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:02.505 [2024-12-06 13:39:55.536109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.301 ms 00:45:02.505 [2024-12-06 13:39:55.536121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.505 [2024-12-06 13:39:55.536231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.505 [2024-12-06 13:39:55.536246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:02.505 [2024-12-06 13:39:55.536259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.032 ms 00:45:02.505 [2024-12-06 13:39:55.536269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.505 [2024-12-06 13:39:55.549242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.505 [2024-12-06 13:39:55.549274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:02.505 [2024-12-06 13:39:55.549304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.925 ms 00:45:02.505 [2024-12-06 13:39:55.549315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.505 [2024-12-06 13:39:55.549466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.505 [2024-12-06 13:39:55.549484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:02.505 [2024-12-06 13:39:55.549496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:45:02.505 [2024-12-06 13:39:55.549507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.505 [2024-12-06 13:39:55.549545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.505 [2024-12-06 13:39:55.549559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:02.505 [2024-12-06 13:39:55.549569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:45:02.505 [2024-12-06 13:39:55.549580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.505 [2024-12-06 13:39:55.549608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:45:02.505 [2024-12-06 13:39:55.555790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.505 [2024-12-06 13:39:55.555822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:02.505 [2024-12-06 13:39:55.555836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.191 ms 00:45:02.505 [2024-12-06 13:39:55.555847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.505 [2024-12-06 13:39:55.555907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.505 [2024-12-06 13:39:55.555921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:02.505 [2024-12-06 13:39:55.555932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:45:02.505 [2024-12-06 13:39:55.555943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.505 [2024-12-06 13:39:55.555971] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:02.505 [2024-12-06 13:39:55.556000] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:02.505 [2024-12-06 13:39:55.556037] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:02.505 [2024-12-06 13:39:55.556058] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:02.505 [2024-12-06 13:39:55.556153] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:02.505 [2024-12-06 13:39:55.556168] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:02.505 [2024-12-06 13:39:55.556182] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:02.505 [2024-12-06 13:39:55.556200] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:02.505 [2024-12-06 13:39:55.556212] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:02.505 [2024-12-06 13:39:55.556224] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:45:02.505 [2024-12-06 13:39:55.556235] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:02.505 [2024-12-06 13:39:55.556246] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:02.505 [2024-12-06 13:39:55.556257] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:02.505 [2024-12-06 13:39:55.556268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.505 [2024-12-06 13:39:55.556278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:02.505 [2024-12-06 13:39:55.556289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:45:02.505 [2024-12-06 13:39:55.556299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.505 [2024-12-06 13:39:55.556380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.505 [2024-12-06 13:39:55.556406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:02.505 [2024-12-06 13:39:55.556417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:45:02.505 [2024-12-06 13:39:55.556427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.505 [2024-12-06 13:39:55.556523] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:02.505 [2024-12-06 13:39:55.556537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:02.505 [2024-12-06 13:39:55.556548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:02.505 [2024-12-06 13:39:55.556560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:02.505 [2024-12-06 13:39:55.556573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:02.505 [2024-12-06 13:39:55.556583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:02.505 [2024-12-06 13:39:55.556593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:45:02.505 [2024-12-06 13:39:55.556603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:02.506 [2024-12-06 13:39:55.556612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:45:02.506 [2024-12-06 13:39:55.556622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:02.506 [2024-12-06 13:39:55.556633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:02.506 [2024-12-06 13:39:55.556655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:45:02.506 [2024-12-06 13:39:55.556665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:02.506 [2024-12-06 13:39:55.556675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:02.506 [2024-12-06 13:39:55.556685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:45:02.506 [2024-12-06 13:39:55.556695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:02.506 [2024-12-06 13:39:55.556705] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:02.506 [2024-12-06 13:39:55.556714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:45:02.506 [2024-12-06 13:39:55.556724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:02.506 [2024-12-06 13:39:55.556734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:02.506 [2024-12-06 13:39:55.556744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:45:02.506 [2024-12-06 13:39:55.556753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:02.506 [2024-12-06 13:39:55.556763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:02.506 [2024-12-06 13:39:55.556773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:45:02.506 [2024-12-06 13:39:55.556782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:02.506 [2024-12-06 13:39:55.556791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:02.506 [2024-12-06 13:39:55.556801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:45:02.506 [2024-12-06 13:39:55.556811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:02.506 [2024-12-06 13:39:55.556820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:02.506 [2024-12-06 13:39:55.556829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:45:02.506 [2024-12-06 13:39:55.556838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:02.506 [2024-12-06 13:39:55.556847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:02.506 [2024-12-06 13:39:55.556856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:45:02.506 [2024-12-06 13:39:55.556865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:02.506 [2024-12-06 13:39:55.556874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:02.506 [2024-12-06 13:39:55.556883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:45:02.506 [2024-12-06 13:39:55.556893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:02.506 [2024-12-06 13:39:55.556903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:02.506 [2024-12-06 13:39:55.556912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:45:02.506 [2024-12-06 13:39:55.556921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:02.506 [2024-12-06 13:39:55.556930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:02.506 [2024-12-06 13:39:55.556940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:45:02.506 [2024-12-06 13:39:55.556949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:02.506 [2024-12-06 13:39:55.556959] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:02.506 [2024-12-06 13:39:55.556970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:02.506 [2024-12-06 13:39:55.556984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:02.506 [2024-12-06 13:39:55.556995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:02.506 [2024-12-06 13:39:55.557006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:02.506 
[2024-12-06 13:39:55.557016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:02.506 [2024-12-06 13:39:55.557026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:02.506 [2024-12-06 13:39:55.557036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:02.506 [2024-12-06 13:39:55.557045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:02.506 [2024-12-06 13:39:55.557054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:02.506 [2024-12-06 13:39:55.557065] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:02.506 [2024-12-06 13:39:55.557078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:02.506 [2024-12-06 13:39:55.557090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:45:02.506 [2024-12-06 13:39:55.557100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:45:02.506 [2024-12-06 13:39:55.557110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:45:02.506 [2024-12-06 13:39:55.557120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:45:02.506 [2024-12-06 13:39:55.557130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:45:02.506 [2024-12-06 13:39:55.557141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:45:02.506 [2024-12-06 13:39:55.557151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:45:02.506 [2024-12-06 13:39:55.557161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:45:02.506 [2024-12-06 13:39:55.557172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:45:02.506 [2024-12-06 13:39:55.557182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:45:02.506 [2024-12-06 13:39:55.557193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:45:02.506 [2024-12-06 13:39:55.557203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:45:02.506 [2024-12-06 13:39:55.557214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:45:02.506 [2024-12-06 13:39:55.557224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:45:02.506 [2024-12-06 13:39:55.557235] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:02.506 [2024-12-06 13:39:55.557247] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:02.506 [2024-12-06 13:39:55.557258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:02.506 [2024-12-06 13:39:55.557269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:02.506 [2024-12-06 13:39:55.557284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:02.506 [2024-12-06 13:39:55.557295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:02.506 [2024-12-06 13:39:55.557306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.506 [2024-12-06 13:39:55.557322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:02.506 [2024-12-06 13:39:55.557333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:45:02.506 [2024-12-06 13:39:55.557343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.765 [2024-12-06 13:39:55.608861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.765 [2024-12-06 13:39:55.608921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:02.765 [2024-12-06 13:39:55.608939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.436 ms 00:45:02.765 [2024-12-06 13:39:55.608951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.765 [2024-12-06 13:39:55.609160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.765 [2024-12-06 13:39:55.609174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:02.765 [2024-12-06 13:39:55.609186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:45:02.765 [2024-12-06 13:39:55.609197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.765 [2024-12-06 13:39:55.678425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.765 [2024-12-06 13:39:55.678500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:02.766 [2024-12-06 13:39:55.678517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.198 ms 00:45:02.766 [2024-12-06 13:39:55.678529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.766 [2024-12-06 13:39:55.678657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.766 [2024-12-06 13:39:55.678671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:02.766 [2024-12-06 13:39:55.678683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:02.766 [2024-12-06 13:39:55.678694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.766 [2024-12-06 13:39:55.679478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.766 [2024-12-06 13:39:55.679513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:02.766 [2024-12-06 13:39:55.679530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:45:02.766 [2024-12-06 13:39:55.679541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.766 [2024-12-06 
13:39:55.679692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.766 [2024-12-06 13:39:55.679706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:02.766 [2024-12-06 13:39:55.679718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:45:02.766 [2024-12-06 13:39:55.679728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.766 [2024-12-06 13:39:55.705707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.766 [2024-12-06 13:39:55.705775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:02.766 [2024-12-06 13:39:55.705793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.953 ms 00:45:02.766 [2024-12-06 13:39:55.705807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.766 [2024-12-06 13:39:55.727724] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:45:02.766 [2024-12-06 13:39:55.727761] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:02.766 [2024-12-06 13:39:55.727778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.766 [2024-12-06 13:39:55.727791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:02.766 [2024-12-06 13:39:55.727803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.799 ms 00:45:02.766 [2024-12-06 13:39:55.727814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.766 [2024-12-06 13:39:55.759867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.766 [2024-12-06 13:39:55.759909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:02.766 [2024-12-06 13:39:55.759925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.965 ms 00:45:02.766 [2024-12-06 13:39:55.759936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.766 [2024-12-06 13:39:55.779489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.766 [2024-12-06 13:39:55.779546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:02.766 [2024-12-06 13:39:55.779577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.463 ms 00:45:02.766 [2024-12-06 13:39:55.779588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.766 [2024-12-06 13:39:55.798802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.766 [2024-12-06 13:39:55.798837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:02.766 [2024-12-06 13:39:55.798850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.121 ms 00:45:02.766 [2024-12-06 13:39:55.798861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:02.766 [2024-12-06 13:39:55.799754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:02.766 [2024-12-06 13:39:55.799786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:02.766 [2024-12-06 13:39:55.799807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:45:02.766 [2024-12-06 13:39:55.799818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.025 [2024-12-06 13:39:55.903627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:45:03.025 [2024-12-06 13:39:55.903698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:03.025 [2024-12-06 13:39:55.903718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.773 ms 00:45:03.025 [2024-12-06 13:39:55.903731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.025 [2024-12-06 13:39:55.916369] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:45:03.025 [2024-12-06 13:39:55.944506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.025 [2024-12-06 13:39:55.944574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:03.025 [2024-12-06 13:39:55.944593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.606 ms 00:45:03.025 [2024-12-06 13:39:55.944613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.025 [2024-12-06 13:39:55.944798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.025 [2024-12-06 13:39:55.944815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:03.025 [2024-12-06 13:39:55.944827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:45:03.025 [2024-12-06 13:39:55.944839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.025 [2024-12-06 13:39:55.944915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.025 [2024-12-06 13:39:55.944927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:03.025 [2024-12-06 13:39:55.944940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:45:03.025 [2024-12-06 13:39:55.944957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.025 [2024-12-06 13:39:55.945000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.025 [2024-12-06 13:39:55.945014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:03.025 [2024-12-06 13:39:55.945026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:45:03.025 [2024-12-06 13:39:55.945036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.025 [2024-12-06 13:39:55.945079] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:03.025 [2024-12-06 13:39:55.945091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.025 [2024-12-06 13:39:55.945102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:03.025 [2024-12-06 13:39:55.945112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:45:03.025 [2024-12-06 13:39:55.945123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.025 [2024-12-06 13:39:55.985214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.025 [2024-12-06 13:39:55.985281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:03.025 [2024-12-06 13:39:55.985297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.062 ms 00:45:03.025 [2024-12-06 13:39:55.985309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.025 [2024-12-06 13:39:55.985451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:03.025 [2024-12-06 13:39:55.985466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:45:03.025 [2024-12-06 13:39:55.985479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:45:03.025 [2024-12-06 13:39:55.985490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:03.025 [2024-12-06 13:39:55.986872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:03.025 [2024-12-06 13:39:55.991681] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 478.355 ms, result 0 00:45:03.025 [2024-12-06 13:39:55.992515] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:03.025 [2024-12-06 13:39:56.011609] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:03.962  [2024-12-06T13:39:58.441Z] Copying: 33/256 [MB] (33 MBps) [2024-12-06T13:39:59.378Z] Copying: 62/256 [MB] (29 MBps) [2024-12-06T13:40:00.315Z] Copying: 93/256 [MB] (30 MBps) [2024-12-06T13:40:01.252Z] Copying: 122/256 [MB] (29 MBps) [2024-12-06T13:40:02.187Z] Copying: 151/256 [MB] (29 MBps) [2024-12-06T13:40:03.125Z] Copying: 180/256 [MB] (28 MBps) [2024-12-06T13:40:04.062Z] Copying: 210/256 [MB] (29 MBps) [2024-12-06T13:40:04.631Z] Copying: 239/256 [MB] (29 MBps) [2024-12-06T13:40:04.631Z] Copying: 256/256 [MB] (average 29 MBps)[2024-12-06 13:40:04.607385] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:11.531 [2024-12-06 13:40:04.624168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.531 [2024-12-06 13:40:04.624222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:11.531 [2024-12-06 13:40:04.624254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:11.531 [2024-12-06 13:40:04.624266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.531 [2024-12-06 13:40:04.624292] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:45:11.531 [2024-12-06 13:40:04.629417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.531 [2024-12-06 13:40:04.629448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:11.531 [2024-12-06 13:40:04.629461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.107 ms 00:45:11.531 [2024-12-06 13:40:04.629472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.791 [2024-12-06 13:40:04.629723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.791 [2024-12-06 13:40:04.629738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:11.791 [2024-12-06 13:40:04.629750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:45:11.791 [2024-12-06 13:40:04.629761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.791 [2024-12-06 13:40:04.632888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.791 [2024-12-06 13:40:04.632915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:11.791 [2024-12-06 13:40:04.632928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.101 ms 00:45:11.791 [2024-12-06 13:40:04.632939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.791 [2024-12-06 13:40:04.638925] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.791 [2024-12-06 13:40:04.638956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:11.791 [2024-12-06 13:40:04.638969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.964 ms 00:45:11.791 [2024-12-06 13:40:04.638980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.791 [2024-12-06 13:40:04.676569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.791 [2024-12-06 13:40:04.676608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:11.791 [2024-12-06 13:40:04.676622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.509 ms 00:45:11.791 [2024-12-06 13:40:04.676633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.791 [2024-12-06 13:40:04.698644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.791 [2024-12-06 13:40:04.698682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:11.791 [2024-12-06 13:40:04.698708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.952 ms 00:45:11.791 [2024-12-06 13:40:04.698720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.791 [2024-12-06 13:40:04.698867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.791 [2024-12-06 13:40:04.698881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:11.791 [2024-12-06 13:40:04.698909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:45:11.791 [2024-12-06 13:40:04.698919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.791 [2024-12-06 13:40:04.737764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.791 [2024-12-06 13:40:04.737801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:11.791 [2024-12-06 13:40:04.737814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.824 ms 00:45:11.791 [2024-12-06 13:40:04.737825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.791 [2024-12-06 13:40:04.775189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.791 [2024-12-06 13:40:04.775227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:11.791 [2024-12-06 13:40:04.775241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.307 ms 00:45:11.791 [2024-12-06 13:40:04.775252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.791 [2024-12-06 13:40:04.812827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.791 [2024-12-06 13:40:04.812871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:11.791 [2024-12-06 13:40:04.812891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.499 ms 00:45:11.791 [2024-12-06 13:40:04.812902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:11.791 [2024-12-06 13:40:04.849698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:11.791 [2024-12-06 13:40:04.849735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:11.791 [2024-12-06 13:40:04.849748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.692 ms 00:45:11.791 [2024-12-06 13:40:04.849760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0
00:45:11.791 [2024-12-06 13:40:04.849817] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:45:11.791 [2024-12-06 13:40:04.849837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-100: identical, 0 / 261120 wr_cnt: 0 state: free]
00:45:11.792 [2024-12-06 13:40:04.850983] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:45:11.792 [2024-12-06 13:40:04.850994] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 484dddcb-97b5-4cf1-8d97-550a0be11fc7
00:45:11.792 [2024-12-06 13:40:04.851005] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:45:11.792 [2024-12-06 13:40:04.851016] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:45:11.792 [2024-12-06 13:40:04.851027] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:45:11.792 [2024-12-06 13:40:04.851038] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:45:11.792 [2024-12-06 13:40:04.851048] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:45:11.792 [2024-12-06 13:40:04.851060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:45:11.792 [2024-12-06 13:40:04.851078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:45:11.792 [2024-12-06 13:40:04.851088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:45:11.792 [2024-12-06 13:40:04.851097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:45:11.792 [2024-12-06 13:40:04.851107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:11.792 [2024-12-06 13:40:04.851118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:45:11.792 [2024-12-06 13:40:04.851130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms
00:45:11.792 [2024-12-06 13:40:04.851140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:11.792 [2024-12-06 13:40:04.872793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:11.793 [2024-12-06 13:40:04.872829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:45:11.793 [2024-12-06 13:40:04.872843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.630 ms
00:45:11.793 [2024-12-06 13:40:04.872855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:11.793 [2024-12-06 13:40:04.873548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:11.793 [2024-12-06 13:40:04.873572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:45:11.793 [2024-12-06 13:40:04.873584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms
00:45:11.793 [2024-12-06 13:40:04.873595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:12.050 [2024-12-06 13:40:04.934961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:45:12.050 [2024-12-06 13:40:04.935000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:45:12.050 [2024-12-06 13:40:04.935014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:45:12.051 [2024-12-06 13:40:04.935032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:12.051 [2024-12-06 13:40:04.935143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:45:12.051 [2024-12-06 13:40:04.935157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:45:12.051
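Two numbers in the dump above can be sanity-checked by hand. Each band reports 0 / 261120, i.e. zero valid blocks out of 261120 per band, which at a 4 KiB FTL block size (an assumption here; the layout arithmetic later in this log is consistent with it) comes to 1020 MiB of user data per band. The WAF line is simply the quotient of the two counters above it, 960 total writes over 0 user writes, hence "inf". A minimal shell check:

    # blocks per band -> MiB, assuming a 4 KiB FTL block size
    echo $(( 261120 * 4096 / 1024 / 1024 ))   # prints 1020 (MiB per band)
    # WAF = total writes / user writes = 960 / 0, reported by the FTL as "inf"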
[2024-12-06 13:40:04.935168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:12.051 [2024-12-06 13:40:04.935179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:12.051 [2024-12-06 13:40:04.935241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:12.051 [2024-12-06 13:40:04.935254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:12.051 [2024-12-06 13:40:04.935266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:12.051 [2024-12-06 13:40:04.935277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:12.051 [2024-12-06 13:40:04.935304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:12.051 [2024-12-06 13:40:04.935316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:12.051 [2024-12-06 13:40:04.935327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:12.051 [2024-12-06 13:40:04.935339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:12.051 [2024-12-06 13:40:05.080464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:12.051 [2024-12-06 13:40:05.080537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:12.051 [2024-12-06 13:40:05.080554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:12.051 [2024-12-06 13:40:05.080567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:12.308 [2024-12-06 13:40:05.193159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:12.308 [2024-12-06 13:40:05.193231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:12.308 [2024-12-06 13:40:05.193248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:12.308 [2024-12-06 13:40:05.193261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:12.308 [2024-12-06 13:40:05.193381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:12.308 [2024-12-06 13:40:05.193409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:12.308 [2024-12-06 13:40:05.193422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:12.308 [2024-12-06 13:40:05.193433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:12.308 [2024-12-06 13:40:05.193466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:12.308 [2024-12-06 13:40:05.193491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:12.308 [2024-12-06 13:40:05.193503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:12.308 [2024-12-06 13:40:05.193513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:12.308 [2024-12-06 13:40:05.193650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:12.308 [2024-12-06 13:40:05.193664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:12.308 [2024-12-06 13:40:05.193676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:12.308 [2024-12-06 13:40:05.193687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:12.308 [2024-12-06 13:40:05.193732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:12.308 [2024-12-06 13:40:05.193746] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:12.308 [2024-12-06 13:40:05.193767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:12.308 [2024-12-06 13:40:05.193778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:12.308 [2024-12-06 13:40:05.193826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:12.308 [2024-12-06 13:40:05.193838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:12.308 [2024-12-06 13:40:05.193849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:12.308 [2024-12-06 13:40:05.193861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:12.308 [2024-12-06 13:40:05.193915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:12.308 [2024-12-06 13:40:05.193936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:12.308 [2024-12-06 13:40:05.193947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:12.308 [2024-12-06 13:40:05.193958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:12.308 [2024-12-06 13:40:05.194140] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 569.959 ms, result 0 00:45:13.686 00:45:13.686 00:45:13.686 13:40:06 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:45:13.686 13:40:06 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:45:14.253 13:40:07 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:14.253 [2024-12-06 13:40:07.270724] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
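The ftl.ftl_trim trace lines above show the verification half of one test pass (a cmp of the read-back data against /dev/zero, then an md5sum) followed by the spdk_dd write that starts the next pass. Put in execution order, a single pass looks roughly like the sketch below; paths are shortened to be relative to the spdk repo, and the trim plus read-back steps that happen between the write and the checks are not traced in this excerpt:

    # write 1024 blocks of a known random pattern through the FTL bdev
    build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 --count=1024 \
        --json=test/ftl/config/ftl.json
    # after trimming a range and reading it back into test/ftl/data,
    # the first 4 MiB (4194304 bytes) must compare equal to zeroes
    cmp --bytes=4194304 test/ftl/data /dev/zero
    # checksum the read-back file so passes can be compared
    md5sum test/ftl/data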
00:45:14.253 [2024-12-06 13:40:07.270924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79670 ] 00:45:14.513 [2024-12-06 13:40:07.473381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:14.772 [2024-12-06 13:40:07.643431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:15.343 [2024-12-06 13:40:08.168741] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:15.343 [2024-12-06 13:40:08.168832] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:15.343 [2024-12-06 13:40:08.343778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.343 [2024-12-06 13:40:08.343862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:15.343 [2024-12-06 13:40:08.343901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:45:15.343 [2024-12-06 13:40:08.343919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.343 [2024-12-06 13:40:08.348306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.343 [2024-12-06 13:40:08.348360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:15.343 [2024-12-06 13:40:08.348385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.344 ms 00:45:15.343 [2024-12-06 13:40:08.348418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.343 [2024-12-06 13:40:08.348778] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:15.343 [2024-12-06 13:40:08.350003] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:15.343 [2024-12-06 13:40:08.350040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.343 [2024-12-06 13:40:08.350057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:15.343 [2024-12-06 13:40:08.350073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:45:15.343 [2024-12-06 13:40:08.350089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.343 [2024-12-06 13:40:08.352927] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:15.343 [2024-12-06 13:40:08.376015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.343 [2024-12-06 13:40:08.376063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:15.343 [2024-12-06 13:40:08.376085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.090 ms 00:45:15.343 [2024-12-06 13:40:08.376102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.343 [2024-12-06 13:40:08.376239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.343 [2024-12-06 13:40:08.376260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:15.343 [2024-12-06 13:40:08.376278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:45:15.343 [2024-12-06 13:40:08.376294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.343 [2024-12-06 13:40:08.389791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:45:15.343 [2024-12-06 13:40:08.389836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:15.343 [2024-12-06 13:40:08.389856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.435 ms 00:45:15.343 [2024-12-06 13:40:08.389874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.343 [2024-12-06 13:40:08.390051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.343 [2024-12-06 13:40:08.390072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:15.343 [2024-12-06 13:40:08.390089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:45:15.343 [2024-12-06 13:40:08.390105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.343 [2024-12-06 13:40:08.390151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.343 [2024-12-06 13:40:08.390167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:15.343 [2024-12-06 13:40:08.390183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:45:15.343 [2024-12-06 13:40:08.390200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.343 [2024-12-06 13:40:08.390235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:45:15.343 [2024-12-06 13:40:08.397265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.343 [2024-12-06 13:40:08.397307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:15.343 [2024-12-06 13:40:08.397325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.038 ms 00:45:15.343 [2024-12-06 13:40:08.397341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.343 [2024-12-06 13:40:08.397427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.343 [2024-12-06 13:40:08.397446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:15.343 [2024-12-06 13:40:08.397463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:45:15.343 [2024-12-06 13:40:08.397479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.343 [2024-12-06 13:40:08.397518] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:15.343 [2024-12-06 13:40:08.397552] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:15.343 [2024-12-06 13:40:08.397601] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:15.344 [2024-12-06 13:40:08.397629] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:15.344 [2024-12-06 13:40:08.397742] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:15.344 [2024-12-06 13:40:08.397762] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:15.344 [2024-12-06 13:40:08.397782] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:15.344 [2024-12-06 13:40:08.397806] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:15.344 [2024-12-06 13:40:08.397824] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:15.344 [2024-12-06 13:40:08.397842] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:45:15.344 [2024-12-06 13:40:08.397862] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:15.344 [2024-12-06 13:40:08.397882] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:15.344 [2024-12-06 13:40:08.397903] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:15.344 [2024-12-06 13:40:08.397923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.344 [2024-12-06 13:40:08.397944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:15.344 [2024-12-06 13:40:08.397961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:45:15.344 [2024-12-06 13:40:08.397976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.344 [2024-12-06 13:40:08.398092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.344 [2024-12-06 13:40:08.398122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:15.344 [2024-12-06 13:40:08.398141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:45:15.344 [2024-12-06 13:40:08.398162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.344 [2024-12-06 13:40:08.398293] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:15.344 [2024-12-06 13:40:08.398315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:15.344 [2024-12-06 13:40:08.398336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:15.344 [2024-12-06 13:40:08.398355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:15.344 [2024-12-06 13:40:08.398375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:15.344 [2024-12-06 13:40:08.398407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:15.344 [2024-12-06 13:40:08.398430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:45:15.344 [2024-12-06 13:40:08.398445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:15.344 [2024-12-06 13:40:08.398460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:45:15.344 [2024-12-06 13:40:08.398475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:15.344 [2024-12-06 13:40:08.398490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:15.344 [2024-12-06 13:40:08.398520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:45:15.344 [2024-12-06 13:40:08.398535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:15.344 [2024-12-06 13:40:08.398550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:15.344 [2024-12-06 13:40:08.398565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:45:15.344 [2024-12-06 13:40:08.398580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:15.344 [2024-12-06 13:40:08.398594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:15.344 [2024-12-06 13:40:08.398609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:45:15.344 [2024-12-06 13:40:08.398624] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:15.344 [2024-12-06 13:40:08.398639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:15.344 [2024-12-06 13:40:08.398654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:45:15.344 [2024-12-06 13:40:08.398669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:15.344 [2024-12-06 13:40:08.398683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:15.344 [2024-12-06 13:40:08.398697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:45:15.344 [2024-12-06 13:40:08.398712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:15.344 [2024-12-06 13:40:08.398726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:15.344 [2024-12-06 13:40:08.398741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:45:15.344 [2024-12-06 13:40:08.398755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:15.344 [2024-12-06 13:40:08.398770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:15.344 [2024-12-06 13:40:08.398785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:45:15.344 [2024-12-06 13:40:08.398799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:15.344 [2024-12-06 13:40:08.398813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:15.344 [2024-12-06 13:40:08.398828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:45:15.344 [2024-12-06 13:40:08.398842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:15.344 [2024-12-06 13:40:08.398857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:15.344 [2024-12-06 13:40:08.398871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:45:15.344 [2024-12-06 13:40:08.398886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:15.344 [2024-12-06 13:40:08.398900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:15.344 [2024-12-06 13:40:08.398914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:45:15.344 [2024-12-06 13:40:08.398928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:15.344 [2024-12-06 13:40:08.398943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:15.344 [2024-12-06 13:40:08.398957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:45:15.344 [2024-12-06 13:40:08.398972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:15.344 [2024-12-06 13:40:08.398986] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:15.344 [2024-12-06 13:40:08.399002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:15.344 [2024-12-06 13:40:08.399023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:15.344 [2024-12-06 13:40:08.399038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:15.344 [2024-12-06 13:40:08.399054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:15.344 [2024-12-06 13:40:08.399069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:15.344 [2024-12-06 13:40:08.399083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:15.344 
[2024-12-06 13:40:08.399098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:15.344 [2024-12-06 13:40:08.399117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:15.344 [2024-12-06 13:40:08.399132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:15.344 [2024-12-06 13:40:08.399149] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:15.344 [2024-12-06 13:40:08.399168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:15.344 [2024-12-06 13:40:08.399186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:45:15.344 [2024-12-06 13:40:08.399203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:45:15.344 [2024-12-06 13:40:08.399219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:45:15.344 [2024-12-06 13:40:08.399235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:45:15.344 [2024-12-06 13:40:08.399251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:45:15.344 [2024-12-06 13:40:08.399267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:45:15.344 [2024-12-06 13:40:08.399284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:45:15.344 [2024-12-06 13:40:08.399300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:45:15.344 [2024-12-06 13:40:08.399316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:45:15.344 [2024-12-06 13:40:08.399332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:45:15.344 [2024-12-06 13:40:08.399348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:45:15.344 [2024-12-06 13:40:08.399363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:45:15.344 [2024-12-06 13:40:08.399379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:45:15.344 [2024-12-06 13:40:08.399409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:45:15.344 [2024-12-06 13:40:08.399426] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:15.344 [2024-12-06 13:40:08.399444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:15.344 [2024-12-06 13:40:08.399462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:45:15.344 [2024-12-06 13:40:08.399479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:15.344 [2024-12-06 13:40:08.399494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:15.344 [2024-12-06 13:40:08.399511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:15.344 [2024-12-06 13:40:08.399528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.344 [2024-12-06 13:40:08.399559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:15.345 [2024-12-06 13:40:08.399576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:45:15.345 [2024-12-06 13:40:08.399591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.605 [2024-12-06 13:40:08.459531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.605 [2024-12-06 13:40:08.459607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:15.605 [2024-12-06 13:40:08.459628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.849 ms 00:45:15.605 [2024-12-06 13:40:08.459642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.605 [2024-12-06 13:40:08.459867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.605 [2024-12-06 13:40:08.459883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:15.605 [2024-12-06 13:40:08.459897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:45:15.605 [2024-12-06 13:40:08.459909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.605 [2024-12-06 13:40:08.535118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.605 [2024-12-06 13:40:08.535189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:15.605 [2024-12-06 13:40:08.535211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.175 ms 00:45:15.605 [2024-12-06 13:40:08.535228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.605 [2024-12-06 13:40:08.535367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.605 [2024-12-06 13:40:08.535386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:15.605 [2024-12-06 13:40:08.535422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:15.605 [2024-12-06 13:40:08.535439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.605 [2024-12-06 13:40:08.536267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.605 [2024-12-06 13:40:08.536312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:15.605 [2024-12-06 13:40:08.536334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:45:15.605 [2024-12-06 13:40:08.536350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.605 [2024-12-06 13:40:08.536543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.605 [2024-12-06 13:40:08.536563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:15.606 [2024-12-06 13:40:08.536580] 
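The layout dump above is internally consistent, which makes for a quick sanity check: the L2P region size follows from the entry count times the address size (23592960 x 4 bytes = 90 MiB, matching the 90.00 MiB reported for Region l2p), and the superblock entry for the same region (type 0x2, blk_sz 0x5a00) gives the identical figure once a 4 KiB FTL block is assumed:

    # L2P table: 23592960 entries x 4-byte addresses
    echo $(( 23592960 * 4 / 1024 / 1024 ))    # prints 90 (MiB)
    # superblock region type 0x2: blk_sz 0x5a00 = 23040 blocks of 4 KiB
    echo $(( 0x5a00 * 4096 / 1024 / 1024 ))   # prints 90 (MiB), confirming 4 KiB blocks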
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:45:15.606 [2024-12-06 13:40:08.536596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.606 [2024-12-06 13:40:08.565862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.606 [2024-12-06 13:40:08.565921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:15.606 [2024-12-06 13:40:08.565944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.231 ms 00:45:15.606 [2024-12-06 13:40:08.565961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.606 [2024-12-06 13:40:08.588383] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:45:15.606 [2024-12-06 13:40:08.588453] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:15.606 [2024-12-06 13:40:08.588481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.606 [2024-12-06 13:40:08.588501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:15.606 [2024-12-06 13:40:08.588523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.309 ms 00:45:15.606 [2024-12-06 13:40:08.588541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.606 [2024-12-06 13:40:08.623194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.606 [2024-12-06 13:40:08.623254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:15.606 [2024-12-06 13:40:08.623277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.537 ms 00:45:15.606 [2024-12-06 13:40:08.623299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.606 [2024-12-06 13:40:08.643832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.606 [2024-12-06 13:40:08.643877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:15.606 [2024-12-06 13:40:08.643897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.389 ms 00:45:15.606 [2024-12-06 13:40:08.643913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.606 [2024-12-06 13:40:08.664053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.606 [2024-12-06 13:40:08.664096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:15.606 [2024-12-06 13:40:08.664115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.041 ms 00:45:15.606 [2024-12-06 13:40:08.664131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.606 [2024-12-06 13:40:08.665108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.606 [2024-12-06 13:40:08.665147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:15.606 [2024-12-06 13:40:08.665166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:45:15.606 [2024-12-06 13:40:08.665182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.865 [2024-12-06 13:40:08.774592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.865 [2024-12-06 13:40:08.774673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:15.865 [2024-12-06 13:40:08.774698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 109.365 ms 00:45:15.865 [2024-12-06 13:40:08.774715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.865 [2024-12-06 13:40:08.789111] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:45:15.865 [2024-12-06 13:40:08.818865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.865 [2024-12-06 13:40:08.818946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:15.865 [2024-12-06 13:40:08.818971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.986 ms 00:45:15.865 [2024-12-06 13:40:08.818997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.865 [2024-12-06 13:40:08.819191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.865 [2024-12-06 13:40:08.819229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:15.865 [2024-12-06 13:40:08.819247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:45:15.865 [2024-12-06 13:40:08.819264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.865 [2024-12-06 13:40:08.819347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.865 [2024-12-06 13:40:08.819365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:15.865 [2024-12-06 13:40:08.819383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:45:15.865 [2024-12-06 13:40:08.819405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.865 [2024-12-06 13:40:08.819484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.865 [2024-12-06 13:40:08.819504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:15.865 [2024-12-06 13:40:08.819521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:45:15.865 [2024-12-06 13:40:08.819537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.865 [2024-12-06 13:40:08.819606] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:15.865 [2024-12-06 13:40:08.819625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.865 [2024-12-06 13:40:08.819642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:15.865 [2024-12-06 13:40:08.819659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:45:15.865 [2024-12-06 13:40:08.819675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.865 [2024-12-06 13:40:08.862759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.865 [2024-12-06 13:40:08.862824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:15.865 [2024-12-06 13:40:08.862847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.046 ms 00:45:15.865 [2024-12-06 13:40:08.862864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:15.865 [2024-12-06 13:40:08.863019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:15.865 [2024-12-06 13:40:08.863041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:15.866 [2024-12-06 13:40:08.863059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:45:15.866 [2024-12-06 13:40:08.863075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
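Every management step in this log is a quartet of trace_step records (Action or Rollback, then name, duration, status), so a per-step timing table can be pulled out of a saved console log mechanically. A sketch, assuming the output has been saved one record per line to build.log (a stand-in name for wherever this console text is captured):

    # pair each "name:" record with the "duration:" record that follows it
    awk '/trace_step/ && /name: /     { sub(/.*name: /, "");     name = $0 }
         /trace_step/ && /duration: / { sub(/.*duration: /, ""); print name " -> " $0 }' build.log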
00:45:15.866 [2024-12-06 13:40:08.864697] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:15.866 [2024-12-06 13:40:08.870608] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 520.466 ms, result 0 00:45:15.866 [2024-12-06 13:40:08.871477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:15.866 [2024-12-06 13:40:08.891449] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:16.125  [2024-12-06T13:40:09.225Z] Copying: 4096/4096 [kB] (average 28 MBps)[2024-12-06 13:40:09.036525] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:16.125 [2024-12-06 13:40:09.052976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.125 [2024-12-06 13:40:09.053027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:16.125 [2024-12-06 13:40:09.053059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:16.125 [2024-12-06 13:40:09.053075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.125 [2024-12-06 13:40:09.053108] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:45:16.125 [2024-12-06 13:40:09.058166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.125 [2024-12-06 13:40:09.058206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:16.125 [2024-12-06 13:40:09.058240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.033 ms 00:45:16.125 [2024-12-06 13:40:09.058256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.125 [2024-12-06 13:40:09.060271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.125 [2024-12-06 13:40:09.060315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:16.125 [2024-12-06 13:40:09.060334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.978 ms 00:45:16.125 [2024-12-06 13:40:09.060350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.125 [2024-12-06 13:40:09.064056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.125 [2024-12-06 13:40:09.064098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:16.125 [2024-12-06 13:40:09.064116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:45:16.125 [2024-12-06 13:40:09.064132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.125 [2024-12-06 13:40:09.071084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.125 [2024-12-06 13:40:09.071126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:16.125 [2024-12-06 13:40:09.071143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.909 ms 00:45:16.125 [2024-12-06 13:40:09.071159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.125 [2024-12-06 13:40:09.110918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.125 [2024-12-06 13:40:09.110964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:16.125 [2024-12-06 13:40:09.110983] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 39.692 ms 00:45:16.125 [2024-12-06 13:40:09.110998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.125 [2024-12-06 13:40:09.133372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.125 [2024-12-06 13:40:09.133435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:16.125 [2024-12-06 13:40:09.133455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.303 ms 00:45:16.125 [2024-12-06 13:40:09.133471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.125 [2024-12-06 13:40:09.133694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.125 [2024-12-06 13:40:09.133715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:16.125 [2024-12-06 13:40:09.133747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:45:16.125 [2024-12-06 13:40:09.133764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.125 [2024-12-06 13:40:09.173265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.125 [2024-12-06 13:40:09.173321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:16.125 [2024-12-06 13:40:09.173347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.475 ms 00:45:16.125 [2024-12-06 13:40:09.173366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.125 [2024-12-06 13:40:09.213222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.125 [2024-12-06 13:40:09.213267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:16.125 [2024-12-06 13:40:09.213301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.758 ms 00:45:16.125 [2024-12-06 13:40:09.213317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.386 [2024-12-06 13:40:09.252040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.386 [2024-12-06 13:40:09.252086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:16.386 [2024-12-06 13:40:09.252106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.650 ms 00:45:16.386 [2024-12-06 13:40:09.252122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.386 [2024-12-06 13:40:09.288684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.386 [2024-12-06 13:40:09.288765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:16.386 [2024-12-06 13:40:09.288786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.448 ms 00:45:16.386 [2024-12-06 13:40:09.288801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.386 [2024-12-06 13:40:09.288873] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:16.386 [2024-12-06 13:40:09.288899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:16.386 [2024-12-06 13:40:09.288919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.288936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.288953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:45:16.387 [2024-12-06 13:40:09.288981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.288998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.289994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:16.387 [2024-12-06 13:40:09.290232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290451] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:16.388 [2024-12-06 13:40:09.290828] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:16.388 [2024-12-06 13:40:09.290842] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 484dddcb-97b5-4cf1-8d97-550a0be11fc7 00:45:16.388 [2024-12-06 13:40:09.290857] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:45:16.388 [2024-12-06 13:40:09.290870] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:45:16.388 [2024-12-06 13:40:09.290884] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:16.388 [2024-12-06 13:40:09.290898] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:16.388 [2024-12-06 13:40:09.290911] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:16.388 [2024-12-06 13:40:09.290925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:16.388 [2024-12-06 13:40:09.290944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:16.388 [2024-12-06 13:40:09.290956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:16.388 [2024-12-06 13:40:09.290969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:16.388 [2024-12-06 13:40:09.290983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.388 [2024-12-06 13:40:09.290997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:16.388 [2024-12-06 13:40:09.291012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:45:16.388 [2024-12-06 13:40:09.291027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.388 [2024-12-06 13:40:09.311467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.388 [2024-12-06 13:40:09.311506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:16.388 [2024-12-06 13:40:09.311522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.410 ms 00:45:16.388 [2024-12-06 13:40:09.311538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.388 [2024-12-06 13:40:09.312151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:16.388 [2024-12-06 13:40:09.312181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:16.388 [2024-12-06 13:40:09.312197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:45:16.388 [2024-12-06 13:40:09.312211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.388 [2024-12-06 13:40:09.368135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.388 [2024-12-06 13:40:09.368184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:16.388 [2024-12-06 13:40:09.368202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.388 [2024-12-06 13:40:09.368224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.388 [2024-12-06 13:40:09.368362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.388 [2024-12-06 13:40:09.368378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:16.388 [2024-12-06 13:40:09.368392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.388 [2024-12-06 13:40:09.368418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.388 [2024-12-06 13:40:09.368481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.388 [2024-12-06 13:40:09.368497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:16.388 [2024-12-06 13:40:09.368522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.388 [2024-12-06 13:40:09.368536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.388 [2024-12-06 13:40:09.368567] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.388 [2024-12-06 13:40:09.368581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:16.388 [2024-12-06 13:40:09.368595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.388 [2024-12-06 13:40:09.368609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.648 [2024-12-06 13:40:09.502831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.648 [2024-12-06 13:40:09.502911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:16.648 [2024-12-06 13:40:09.502950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.648 [2024-12-06 13:40:09.502974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.648 [2024-12-06 13:40:09.619896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.648 [2024-12-06 13:40:09.619975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:16.648 [2024-12-06 13:40:09.619997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.648 [2024-12-06 13:40:09.620014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.648 [2024-12-06 13:40:09.620198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.648 [2024-12-06 13:40:09.620220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:16.648 [2024-12-06 13:40:09.620239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.648 [2024-12-06 13:40:09.620262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.648 [2024-12-06 13:40:09.620310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.648 [2024-12-06 13:40:09.620342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:16.648 [2024-12-06 13:40:09.620359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.648 [2024-12-06 13:40:09.620380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.648 [2024-12-06 13:40:09.620587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.648 [2024-12-06 13:40:09.620612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:16.648 [2024-12-06 13:40:09.620628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.648 [2024-12-06 13:40:09.620644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.648 [2024-12-06 13:40:09.620702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.648 [2024-12-06 13:40:09.620720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:16.648 [2024-12-06 13:40:09.620743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.648 [2024-12-06 13:40:09.620759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.648 [2024-12-06 13:40:09.620818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.648 [2024-12-06 13:40:09.620841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:16.648 [2024-12-06 13:40:09.620858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.648 [2024-12-06 13:40:09.620873] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:45:16.648 [2024-12-06 13:40:09.620940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:16.648 [2024-12-06 13:40:09.620963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:16.648 [2024-12-06 13:40:09.620978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:16.648 [2024-12-06 13:40:09.620994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:16.648 [2024-12-06 13:40:09.621194] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 568.198 ms, result 0 00:45:18.024 00:45:18.024 00:45:18.024 13:40:10 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79713 00:45:18.024 13:40:10 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:45:18.024 13:40:10 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79713 00:45:18.024 13:40:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79713 ']' 00:45:18.024 13:40:10 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:18.024 13:40:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:18.024 13:40:10 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:18.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:18.024 13:40:10 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:18.024 13:40:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:45:18.282 [2024-12-06 13:40:11.143699] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:45:18.282 [2024-12-06 13:40:11.143897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79713 ] 00:45:18.282 [2024-12-06 13:40:11.339659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:18.540 [2024-12-06 13:40:11.531356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:19.916 13:40:12 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:19.916 13:40:12 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:45:19.917 13:40:12 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:45:19.917 [2024-12-06 13:40:12.974147] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:19.917 [2024-12-06 13:40:12.974241] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:20.177 [2024-12-06 13:40:13.168286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.177 [2024-12-06 13:40:13.168352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:20.177 [2024-12-06 13:40:13.168374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:20.177 [2024-12-06 13:40:13.168389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.177 [2024-12-06 13:40:13.172535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.177 [2024-12-06 13:40:13.172577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:20.177 [2024-12-06 13:40:13.172595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.108 ms 00:45:20.177 [2024-12-06 13:40:13.172609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.177 [2024-12-06 13:40:13.172745] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:20.177 [2024-12-06 13:40:13.173852] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:20.177 [2024-12-06 13:40:13.173895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.177 [2024-12-06 13:40:13.173910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:20.177 [2024-12-06 13:40:13.173926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:45:20.177 [2024-12-06 13:40:13.173940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.177 [2024-12-06 13:40:13.176872] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:20.177 [2024-12-06 13:40:13.197286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.177 [2024-12-06 13:40:13.197341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:20.177 [2024-12-06 13:40:13.197360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.420 ms 00:45:20.177 [2024-12-06 13:40:13.197380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.177 [2024-12-06 13:40:13.197525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.177 [2024-12-06 13:40:13.197550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:20.177 [2024-12-06 13:40:13.197565] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:45:20.177 [2024-12-06 13:40:13.197584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.177 [2024-12-06 13:40:13.210950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.177 [2024-12-06 13:40:13.211033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:20.177 [2024-12-06 13:40:13.211059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.292 ms 00:45:20.177 [2024-12-06 13:40:13.211081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.177 [2024-12-06 13:40:13.211274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.177 [2024-12-06 13:40:13.211299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:20.177 [2024-12-06 13:40:13.211315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:45:20.177 [2024-12-06 13:40:13.211343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.177 [2024-12-06 13:40:13.211384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.177 [2024-12-06 13:40:13.211429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:20.177 [2024-12-06 13:40:13.211444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:45:20.177 [2024-12-06 13:40:13.211464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.177 [2024-12-06 13:40:13.211499] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:45:20.177 [2024-12-06 13:40:13.217766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.177 [2024-12-06 13:40:13.217802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:20.177 [2024-12-06 13:40:13.217824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.269 ms 00:45:20.177 [2024-12-06 13:40:13.217838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.177 [2024-12-06 13:40:13.217915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.177 [2024-12-06 13:40:13.217931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:20.177 [2024-12-06 13:40:13.217951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:45:20.177 [2024-12-06 13:40:13.217971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.177 [2024-12-06 13:40:13.218006] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:20.177 [2024-12-06 13:40:13.218041] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:20.177 [2024-12-06 13:40:13.218104] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:20.177 [2024-12-06 13:40:13.218130] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:20.177 [2024-12-06 13:40:13.218237] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:20.177 [2024-12-06 13:40:13.218254] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:20.177 [2024-12-06 13:40:13.218289] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:20.177 [2024-12-06 13:40:13.218312] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:20.177 [2024-12-06 13:40:13.218334] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:20.177 [2024-12-06 13:40:13.218350] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:45:20.177 [2024-12-06 13:40:13.218372] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:20.177 [2024-12-06 13:40:13.218393] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:20.177 [2024-12-06 13:40:13.218436] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:20.177 [2024-12-06 13:40:13.218451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.177 [2024-12-06 13:40:13.218475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:20.177 [2024-12-06 13:40:13.218495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:45:20.177 [2024-12-06 13:40:13.218515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.177 [2024-12-06 13:40:13.218616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.178 [2024-12-06 13:40:13.218638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:20.178 [2024-12-06 13:40:13.218655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:45:20.178 [2024-12-06 13:40:13.218681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.178 [2024-12-06 13:40:13.218795] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:20.178 [2024-12-06 13:40:13.218821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:20.178 [2024-12-06 13:40:13.218835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:20.178 [2024-12-06 13:40:13.218860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:20.178 [2024-12-06 13:40:13.218877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:20.178 [2024-12-06 13:40:13.218899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:20.178 [2024-12-06 13:40:13.218912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:45:20.178 [2024-12-06 13:40:13.218936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:20.178 [2024-12-06 13:40:13.218950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:45:20.178 [2024-12-06 13:40:13.218968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:20.178 [2024-12-06 13:40:13.218980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:20.178 [2024-12-06 13:40:13.218999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:45:20.178 [2024-12-06 13:40:13.219012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:20.178 [2024-12-06 13:40:13.219030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:20.178 [2024-12-06 13:40:13.219043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:45:20.178 [2024-12-06 13:40:13.219064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:20.178 
[2024-12-06 13:40:13.219077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:20.178 [2024-12-06 13:40:13.219096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:45:20.178 [2024-12-06 13:40:13.219123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:20.178 [2024-12-06 13:40:13.219143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:20.178 [2024-12-06 13:40:13.219155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:45:20.178 [2024-12-06 13:40:13.219174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:20.178 [2024-12-06 13:40:13.219187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:20.178 [2024-12-06 13:40:13.219211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:45:20.178 [2024-12-06 13:40:13.219224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:20.178 [2024-12-06 13:40:13.219242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:20.178 [2024-12-06 13:40:13.219255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:45:20.178 [2024-12-06 13:40:13.219273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:20.178 [2024-12-06 13:40:13.219286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:20.178 [2024-12-06 13:40:13.219306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:45:20.178 [2024-12-06 13:40:13.219320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:20.178 [2024-12-06 13:40:13.219338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:20.178 [2024-12-06 13:40:13.219350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:45:20.178 [2024-12-06 13:40:13.219369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:20.178 [2024-12-06 13:40:13.219382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:20.178 [2024-12-06 13:40:13.219412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:45:20.178 [2024-12-06 13:40:13.219426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:20.178 [2024-12-06 13:40:13.219444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:20.178 [2024-12-06 13:40:13.219457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:45:20.178 [2024-12-06 13:40:13.219481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:20.178 [2024-12-06 13:40:13.219493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:20.178 [2024-12-06 13:40:13.219512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:45:20.178 [2024-12-06 13:40:13.219525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:20.178 [2024-12-06 13:40:13.219540] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:20.178 [2024-12-06 13:40:13.219568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:20.178 [2024-12-06 13:40:13.219585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:20.178 [2024-12-06 13:40:13.219599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:20.178 [2024-12-06 13:40:13.219617] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:45:20.178 [2024-12-06 13:40:13.219630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:20.178 [2024-12-06 13:40:13.219646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:20.178 [2024-12-06 13:40:13.219659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:20.178 [2024-12-06 13:40:13.219675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:20.178 [2024-12-06 13:40:13.219687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:20.178 [2024-12-06 13:40:13.219706] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:20.178 [2024-12-06 13:40:13.219723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:20.178 [2024-12-06 13:40:13.219746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:45:20.178 [2024-12-06 13:40:13.219761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:45:20.178 [2024-12-06 13:40:13.219778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:45:20.178 [2024-12-06 13:40:13.219792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:45:20.178 [2024-12-06 13:40:13.219810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:45:20.178 [2024-12-06 13:40:13.219823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:45:20.178 [2024-12-06 13:40:13.219841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:45:20.178 [2024-12-06 13:40:13.219855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:45:20.178 [2024-12-06 13:40:13.219871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:45:20.178 [2024-12-06 13:40:13.219885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:45:20.178 [2024-12-06 13:40:13.219903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:45:20.178 [2024-12-06 13:40:13.219916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:45:20.178 [2024-12-06 13:40:13.219933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:45:20.178 [2024-12-06 13:40:13.219948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:45:20.178 [2024-12-06 13:40:13.219964] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:20.178 [2024-12-06 
13:40:13.219979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:20.178 [2024-12-06 13:40:13.220000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:20.178 [2024-12-06 13:40:13.220014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:20.178 [2024-12-06 13:40:13.220031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:20.178 [2024-12-06 13:40:13.220045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:20.178 [2024-12-06 13:40:13.220063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.179 [2024-12-06 13:40:13.220077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:20.179 [2024-12-06 13:40:13.220093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:45:20.179 [2024-12-06 13:40:13.220110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.439 [2024-12-06 13:40:13.276261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.439 [2024-12-06 13:40:13.276328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:20.439 [2024-12-06 13:40:13.276358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.059 ms 00:45:20.439 [2024-12-06 13:40:13.276387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.439 [2024-12-06 13:40:13.276690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.439 [2024-12-06 13:40:13.276714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:20.439 [2024-12-06 13:40:13.276740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:45:20.439 [2024-12-06 13:40:13.276758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.439 [2024-12-06 13:40:13.337986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.439 [2024-12-06 13:40:13.338044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:20.439 [2024-12-06 13:40:13.338070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.174 ms 00:45:20.439 [2024-12-06 13:40:13.338084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.439 [2024-12-06 13:40:13.338221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.439 [2024-12-06 13:40:13.338238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:20.439 [2024-12-06 13:40:13.338259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:45:20.439 [2024-12-06 13:40:13.338273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.439 [2024-12-06 13:40:13.339219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.439 [2024-12-06 13:40:13.339261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:20.439 [2024-12-06 13:40:13.339283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:45:20.439 [2024-12-06 13:40:13.339297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:45:20.439 [2024-12-06 13:40:13.339490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.439 [2024-12-06 13:40:13.339508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:20.439 [2024-12-06 13:40:13.339529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:45:20.439 [2024-12-06 13:40:13.339543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.439 [2024-12-06 13:40:13.369612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.439 [2024-12-06 13:40:13.369682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:20.439 [2024-12-06 13:40:13.369716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.010 ms 00:45:20.439 [2024-12-06 13:40:13.369730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.439 [2024-12-06 13:40:13.401612] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:45:20.439 [2024-12-06 13:40:13.401655] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:20.439 [2024-12-06 13:40:13.401680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.439 [2024-12-06 13:40:13.401695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:20.439 [2024-12-06 13:40:13.401714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.741 ms 00:45:20.439 [2024-12-06 13:40:13.401738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.439 [2024-12-06 13:40:13.432048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.439 [2024-12-06 13:40:13.432092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:20.439 [2024-12-06 13:40:13.432117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.202 ms 00:45:20.439 [2024-12-06 13:40:13.432131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.439 [2024-12-06 13:40:13.449968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.439 [2024-12-06 13:40:13.450010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:20.439 [2024-12-06 13:40:13.450049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.728 ms 00:45:20.439 [2024-12-06 13:40:13.450067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.439 [2024-12-06 13:40:13.467574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.439 [2024-12-06 13:40:13.467613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:20.439 [2024-12-06 13:40:13.467637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.411 ms 00:45:20.439 [2024-12-06 13:40:13.467650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.439 [2024-12-06 13:40:13.468579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.439 [2024-12-06 13:40:13.468615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:20.439 [2024-12-06 13:40:13.468637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:45:20.439 [2024-12-06 13:40:13.468651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.698 [2024-12-06 
13:40:13.567346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.698 [2024-12-06 13:40:13.567471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:20.698 [2024-12-06 13:40:13.567509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.644 ms 00:45:20.698 [2024-12-06 13:40:13.567524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.698 [2024-12-06 13:40:13.579743] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:45:20.698 [2024-12-06 13:40:13.608577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.698 [2024-12-06 13:40:13.608679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:20.698 [2024-12-06 13:40:13.608708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.895 ms 00:45:20.698 [2024-12-06 13:40:13.608728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.698 [2024-12-06 13:40:13.608905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.698 [2024-12-06 13:40:13.608934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:20.698 [2024-12-06 13:40:13.608954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:45:20.698 [2024-12-06 13:40:13.608981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.698 [2024-12-06 13:40:13.609071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.698 [2024-12-06 13:40:13.609095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:20.698 [2024-12-06 13:40:13.609111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:45:20.698 [2024-12-06 13:40:13.609138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.698 [2024-12-06 13:40:13.609173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.698 [2024-12-06 13:40:13.609194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:20.698 [2024-12-06 13:40:13.609208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:20.698 [2024-12-06 13:40:13.609227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.698 [2024-12-06 13:40:13.609278] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:20.698 [2024-12-06 13:40:13.609302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.698 [2024-12-06 13:40:13.609322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:20.698 [2024-12-06 13:40:13.609339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:45:20.698 [2024-12-06 13:40:13.609352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.698 [2024-12-06 13:40:13.646458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.698 [2024-12-06 13:40:13.646503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:20.698 [2024-12-06 13:40:13.646524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.063 ms 00:45:20.698 [2024-12-06 13:40:13.646539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.698 [2024-12-06 13:40:13.646665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.698 [2024-12-06 13:40:13.646681] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:20.698 [2024-12-06 13:40:13.646700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:45:20.698 [2024-12-06 13:40:13.646717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.698 [2024-12-06 13:40:13.648227] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:20.698 [2024-12-06 13:40:13.653066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 479.505 ms, result 0 00:45:20.698 [2024-12-06 13:40:13.654256] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:20.698 Some configs were skipped because the RPC state that can call them passed over. 00:45:20.698 13:40:13 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:45:20.957 [2024-12-06 13:40:13.881459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.957 [2024-12-06 13:40:13.881543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:45:20.957 [2024-12-06 13:40:13.881570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:45:20.957 [2024-12-06 13:40:13.881589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.957 [2024-12-06 13:40:13.881638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.838 ms, result 0 00:45:20.957 true 00:45:20.957 13:40:13 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:45:21.217 [2024-12-06 13:40:14.077178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.217 [2024-12-06 13:40:14.077240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:45:21.217 [2024-12-06 13:40:14.077265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:45:21.217 [2024-12-06 13:40:14.077281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.217 [2024-12-06 13:40:14.077335] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.241 ms, result 0 00:45:21.217 true 00:45:21.217 13:40:14 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79713 00:45:21.217 13:40:14 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79713 ']' 00:45:21.217 13:40:14 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79713 00:45:21.217 13:40:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:45:21.217 13:40:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:21.217 13:40:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79713 00:45:21.217 13:40:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:21.217 13:40:14 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:21.217 killing process with pid 79713 00:45:21.217 13:40:14 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79713' 00:45:21.217 13:40:14 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79713 00:45:21.217 13:40:14 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79713 00:45:22.604 [2024-12-06 13:40:15.361008] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.604 [2024-12-06 13:40:15.361084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:22.604 [2024-12-06 13:40:15.361105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:22.604 [2024-12-06 13:40:15.361121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.604 [2024-12-06 13:40:15.361154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:45:22.604 [2024-12-06 13:40:15.365498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.604 [2024-12-06 13:40:15.365539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:22.604 [2024-12-06 13:40:15.365562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.315 ms 00:45:22.604 [2024-12-06 13:40:15.365576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.604 [2024-12-06 13:40:15.365894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.604 [2024-12-06 13:40:15.365912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:22.604 [2024-12-06 13:40:15.365929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:45:22.604 [2024-12-06 13:40:15.365943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.604 [2024-12-06 13:40:15.369500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.604 [2024-12-06 13:40:15.369540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:22.604 [2024-12-06 13:40:15.369561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.527 ms 00:45:22.604 [2024-12-06 13:40:15.369575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.604 [2024-12-06 13:40:15.375594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.604 [2024-12-06 13:40:15.375638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:22.604 [2024-12-06 13:40:15.375661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.958 ms 00:45:22.604 [2024-12-06 13:40:15.375678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.604 [2024-12-06 13:40:15.390380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.604 [2024-12-06 13:40:15.390436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:22.604 [2024-12-06 13:40:15.390459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.599 ms 00:45:22.604 [2024-12-06 13:40:15.390473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.604 [2024-12-06 13:40:15.400496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.604 [2024-12-06 13:40:15.400538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:22.604 [2024-12-06 13:40:15.400557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.971 ms 00:45:22.604 [2024-12-06 13:40:15.400571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.604 [2024-12-06 13:40:15.400714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.604 [2024-12-06 13:40:15.400730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:22.604 [2024-12-06 13:40:15.400747] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:45:22.604 [2024-12-06 13:40:15.400761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.604 [2024-12-06 13:40:15.415686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.604 [2024-12-06 13:40:15.415725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:22.604 [2024-12-06 13:40:15.415744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.894 ms 00:45:22.604 [2024-12-06 13:40:15.415757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.604 [2024-12-06 13:40:15.430108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.604 [2024-12-06 13:40:15.430154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:22.604 [2024-12-06 13:40:15.430184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.298 ms 00:45:22.604 [2024-12-06 13:40:15.430202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.604 [2024-12-06 13:40:15.444108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.604 [2024-12-06 13:40:15.444147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:22.604 [2024-12-06 13:40:15.444167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.848 ms 00:45:22.604 [2024-12-06 13:40:15.444179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.604 [2024-12-06 13:40:15.458117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.604 [2024-12-06 13:40:15.458155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:22.604 [2024-12-06 13:40:15.458174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.856 ms 00:45:22.604 [2024-12-06 13:40:15.458187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.604 [2024-12-06 13:40:15.458236] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:22.604 [2024-12-06 13:40:15.458257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 
13:40:15.458434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:45:22.605 [2024-12-06 13:40:15.458889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.458980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:22.605 [2024-12-06 13:40:15.459803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.459820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.459834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.459851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.459865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.459883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.459897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.459915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.459929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.459947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.459962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.459979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.459993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.460012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:22.606 [2024-12-06 13:40:15.460045] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:22.606 [2024-12-06 13:40:15.460070] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 484dddcb-97b5-4cf1-8d97-550a0be11fc7 00:45:22.606 [2024-12-06 13:40:15.460090] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:45:22.606 [2024-12-06 13:40:15.460106] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:45:22.606 [2024-12-06 13:40:15.460119] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:22.606 [2024-12-06 13:40:15.460136] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:22.606 [2024-12-06 13:40:15.460149] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:22.606 [2024-12-06 13:40:15.460166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:22.606 [2024-12-06 13:40:15.460179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:22.606 [2024-12-06 13:40:15.460195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:22.606 [2024-12-06 13:40:15.460207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:22.606 [2024-12-06 13:40:15.460224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:45:22.606 [2024-12-06 13:40:15.460238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:22.606 [2024-12-06 13:40:15.460255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:45:22.606 [2024-12-06 13:40:15.460269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.606 [2024-12-06 13:40:15.480451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.606 [2024-12-06 13:40:15.480495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:22.606 [2024-12-06 13:40:15.480535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.133 ms 00:45:22.606 [2024-12-06 13:40:15.480555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.606 [2024-12-06 13:40:15.481161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.606 [2024-12-06 13:40:15.481192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:22.606 [2024-12-06 13:40:15.481221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:45:22.606 [2024-12-06 13:40:15.481235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.606 [2024-12-06 13:40:15.552868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.606 [2024-12-06 13:40:15.552914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:22.606 [2024-12-06 13:40:15.552936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.606 [2024-12-06 13:40:15.552952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.606 [2024-12-06 13:40:15.553083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.606 [2024-12-06 13:40:15.553099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:22.606 [2024-12-06 13:40:15.553121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.606 [2024-12-06 13:40:15.553136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.606 [2024-12-06 13:40:15.553202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.606 [2024-12-06 13:40:15.553218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:22.606 [2024-12-06 13:40:15.553239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.606 [2024-12-06 13:40:15.553254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.606 [2024-12-06 13:40:15.553283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.606 [2024-12-06 13:40:15.553297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:22.606 [2024-12-06 13:40:15.553314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.606 [2024-12-06 13:40:15.553332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.606 [2024-12-06 13:40:15.694263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.606 [2024-12-06 13:40:15.694344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:22.606 [2024-12-06 13:40:15.694370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.606 [2024-12-06 13:40:15.694382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.864 [2024-12-06 
13:40:15.806096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.864 [2024-12-06 13:40:15.806175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:22.864 [2024-12-06 13:40:15.806197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.864 [2024-12-06 13:40:15.806213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.864 [2024-12-06 13:40:15.806366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.864 [2024-12-06 13:40:15.806380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:22.864 [2024-12-06 13:40:15.806426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.864 [2024-12-06 13:40:15.806439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.864 [2024-12-06 13:40:15.806480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.864 [2024-12-06 13:40:15.806492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:22.864 [2024-12-06 13:40:15.806509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.864 [2024-12-06 13:40:15.806520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.864 [2024-12-06 13:40:15.806675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.864 [2024-12-06 13:40:15.806689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:22.864 [2024-12-06 13:40:15.806706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.864 [2024-12-06 13:40:15.806718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.864 [2024-12-06 13:40:15.806770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.864 [2024-12-06 13:40:15.806783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:22.864 [2024-12-06 13:40:15.806802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.864 [2024-12-06 13:40:15.806813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.864 [2024-12-06 13:40:15.806877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.864 [2024-12-06 13:40:15.806889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:22.864 [2024-12-06 13:40:15.806911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.864 [2024-12-06 13:40:15.806922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.864 [2024-12-06 13:40:15.806983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.864 [2024-12-06 13:40:15.806996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:22.864 [2024-12-06 13:40:15.807013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.864 [2024-12-06 13:40:15.807024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.864 [2024-12-06 13:40:15.807216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 446.166 ms, result 0 00:45:24.238 13:40:16 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:24.238 [2024-12-06 13:40:17.109807] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:45:24.238 [2024-12-06 13:40:17.110007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79792 ] 00:45:24.238 [2024-12-06 13:40:17.299191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:24.496 [2024-12-06 13:40:17.444093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:25.064 [2024-12-06 13:40:17.897070] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:25.064 [2024-12-06 13:40:17.897155] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:25.064 [2024-12-06 13:40:18.067811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.064 [2024-12-06 13:40:18.067895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:25.064 [2024-12-06 13:40:18.067915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:25.064 [2024-12-06 13:40:18.067928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.064 [2024-12-06 13:40:18.071625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.064 [2024-12-06 13:40:18.071673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:25.064 [2024-12-06 13:40:18.071687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.675 ms 00:45:25.064 [2024-12-06 13:40:18.071699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.064 [2024-12-06 13:40:18.071814] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:25.064 [2024-12-06 13:40:18.072953] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:25.064 [2024-12-06 13:40:18.072991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.064 [2024-12-06 13:40:18.073004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:25.064 [2024-12-06 13:40:18.073017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:45:25.064 [2024-12-06 13:40:18.073029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.064 [2024-12-06 13:40:18.075971] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:25.064 [2024-12-06 13:40:18.097672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.064 [2024-12-06 13:40:18.097712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:25.064 [2024-12-06 13:40:18.097730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.702 ms 00:45:25.064 [2024-12-06 13:40:18.097742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.064 [2024-12-06 13:40:18.097861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.064 [2024-12-06 13:40:18.097876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:25.064 [2024-12-06 13:40:18.097889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:45:25.064 [2024-12-06 
13:40:18.097900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.064 [2024-12-06 13:40:18.110610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.064 [2024-12-06 13:40:18.110643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:25.064 [2024-12-06 13:40:18.110658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.660 ms 00:45:25.064 [2024-12-06 13:40:18.110669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.064 [2024-12-06 13:40:18.110809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.064 [2024-12-06 13:40:18.110825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:25.064 [2024-12-06 13:40:18.110837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:45:25.064 [2024-12-06 13:40:18.110848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.064 [2024-12-06 13:40:18.110890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.064 [2024-12-06 13:40:18.110902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:25.064 [2024-12-06 13:40:18.110914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:45:25.064 [2024-12-06 13:40:18.110924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.064 [2024-12-06 13:40:18.110953] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:45:25.064 [2024-12-06 13:40:18.116843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.064 [2024-12-06 13:40:18.116884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:25.064 [2024-12-06 13:40:18.116897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.897 ms 00:45:25.064 [2024-12-06 13:40:18.116909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.064 [2024-12-06 13:40:18.116969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.064 [2024-12-06 13:40:18.116983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:25.064 [2024-12-06 13:40:18.116995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:45:25.064 [2024-12-06 13:40:18.117006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.064 [2024-12-06 13:40:18.117033] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:25.064 [2024-12-06 13:40:18.117065] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:25.064 [2024-12-06 13:40:18.117104] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:25.065 [2024-12-06 13:40:18.117125] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:25.065 [2024-12-06 13:40:18.117226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:25.065 [2024-12-06 13:40:18.117241] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:25.065 [2024-12-06 13:40:18.117255] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:45:25.065 [2024-12-06 13:40:18.117273] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:25.065 [2024-12-06 13:40:18.117286] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:25.065 [2024-12-06 13:40:18.117298] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:45:25.065 [2024-12-06 13:40:18.117309] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:25.065 [2024-12-06 13:40:18.117319] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:25.065 [2024-12-06 13:40:18.117329] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:25.065 [2024-12-06 13:40:18.117341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.065 [2024-12-06 13:40:18.117353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:25.065 [2024-12-06 13:40:18.117364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:45:25.065 [2024-12-06 13:40:18.117374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.065 [2024-12-06 13:40:18.117468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.065 [2024-12-06 13:40:18.117488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:25.065 [2024-12-06 13:40:18.117500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:45:25.065 [2024-12-06 13:40:18.117511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.065 [2024-12-06 13:40:18.117607] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:25.065 [2024-12-06 13:40:18.117621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:25.065 [2024-12-06 13:40:18.117631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:25.065 [2024-12-06 13:40:18.117643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:25.065 [2024-12-06 13:40:18.117654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:25.065 [2024-12-06 13:40:18.117664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:25.065 [2024-12-06 13:40:18.117674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:45:25.065 [2024-12-06 13:40:18.117683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:25.065 [2024-12-06 13:40:18.117693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:45:25.065 [2024-12-06 13:40:18.117703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:25.065 [2024-12-06 13:40:18.117712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:25.065 [2024-12-06 13:40:18.117735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:45:25.065 [2024-12-06 13:40:18.117744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:25.065 [2024-12-06 13:40:18.117754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:25.065 [2024-12-06 13:40:18.117765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:45:25.065 [2024-12-06 13:40:18.117775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:25.065 [2024-12-06 13:40:18.117785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:45:25.065 [2024-12-06 13:40:18.117795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:45:25.065 [2024-12-06 13:40:18.117805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:25.065 [2024-12-06 13:40:18.117815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:25.065 [2024-12-06 13:40:18.117825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:45:25.065 [2024-12-06 13:40:18.117834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:25.065 [2024-12-06 13:40:18.117844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:25.065 [2024-12-06 13:40:18.117854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:45:25.065 [2024-12-06 13:40:18.117863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:25.065 [2024-12-06 13:40:18.117872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:25.065 [2024-12-06 13:40:18.117883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:45:25.065 [2024-12-06 13:40:18.117893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:25.065 [2024-12-06 13:40:18.117902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:25.065 [2024-12-06 13:40:18.117911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:45:25.065 [2024-12-06 13:40:18.117921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:25.065 [2024-12-06 13:40:18.117930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:25.065 [2024-12-06 13:40:18.117939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:45:25.065 [2024-12-06 13:40:18.117948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:25.065 [2024-12-06 13:40:18.117957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:25.065 [2024-12-06 13:40:18.117967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:45:25.065 [2024-12-06 13:40:18.117976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:25.065 [2024-12-06 13:40:18.117985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:25.065 [2024-12-06 13:40:18.117994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:45:25.065 [2024-12-06 13:40:18.118003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:25.065 [2024-12-06 13:40:18.118012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:25.065 [2024-12-06 13:40:18.118022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:45:25.065 [2024-12-06 13:40:18.118032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:25.065 [2024-12-06 13:40:18.118041] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:25.065 [2024-12-06 13:40:18.118051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:25.065 [2024-12-06 13:40:18.118067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:25.065 [2024-12-06 13:40:18.118078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:25.065 [2024-12-06 13:40:18.118089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:25.065 [2024-12-06 13:40:18.118099] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:25.065 [2024-12-06 13:40:18.118108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:25.065 [2024-12-06 13:40:18.118118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:25.065 [2024-12-06 13:40:18.118127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:25.065 [2024-12-06 13:40:18.118137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:25.065 [2024-12-06 13:40:18.118147] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:25.065 [2024-12-06 13:40:18.118160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:25.065 [2024-12-06 13:40:18.118172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:45:25.065 [2024-12-06 13:40:18.118183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:45:25.065 [2024-12-06 13:40:18.118194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:45:25.065 [2024-12-06 13:40:18.118204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:45:25.065 [2024-12-06 13:40:18.118215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:45:25.065 [2024-12-06 13:40:18.118225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:45:25.065 [2024-12-06 13:40:18.118236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:45:25.065 [2024-12-06 13:40:18.118247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:45:25.065 [2024-12-06 13:40:18.118257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:45:25.065 [2024-12-06 13:40:18.118268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:45:25.065 [2024-12-06 13:40:18.118278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:45:25.065 [2024-12-06 13:40:18.118288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:45:25.065 [2024-12-06 13:40:18.118299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:45:25.065 [2024-12-06 13:40:18.118310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:45:25.065 [2024-12-06 13:40:18.118320] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:25.065 [2024-12-06 13:40:18.118332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:25.065 [2024-12-06 13:40:18.118344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:25.065 [2024-12-06 13:40:18.118354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:25.065 [2024-12-06 13:40:18.118366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:25.065 [2024-12-06 13:40:18.118376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:25.065 [2024-12-06 13:40:18.118387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.065 [2024-12-06 13:40:18.118420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:25.065 [2024-12-06 13:40:18.118433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:45:25.065 [2024-12-06 13:40:18.118443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.325 [2024-12-06 13:40:18.169080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.325 [2024-12-06 13:40:18.169141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:25.325 [2024-12-06 13:40:18.169161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.565 ms 00:45:25.325 [2024-12-06 13:40:18.169173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.325 [2024-12-06 13:40:18.169382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.325 [2024-12-06 13:40:18.169408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:25.325 [2024-12-06 13:40:18.169421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:45:25.325 [2024-12-06 13:40:18.169432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.325 [2024-12-06 13:40:18.235528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.325 [2024-12-06 13:40:18.235612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:25.325 [2024-12-06 13:40:18.235630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.067 ms 00:45:25.325 [2024-12-06 13:40:18.235642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.325 [2024-12-06 13:40:18.235767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.325 [2024-12-06 13:40:18.235780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:25.325 [2024-12-06 13:40:18.235792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:25.325 [2024-12-06 13:40:18.235802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.325 [2024-12-06 13:40:18.236601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.325 [2024-12-06 13:40:18.236626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:25.325 [2024-12-06 13:40:18.236644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:45:25.325 [2024-12-06 13:40:18.236655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.325 [2024-12-06 13:40:18.236799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:45:25.325 [2024-12-06 13:40:18.236814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:25.325 [2024-12-06 13:40:18.236825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:45:25.325 [2024-12-06 13:40:18.236836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.325 [2024-12-06 13:40:18.261441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.325 [2024-12-06 13:40:18.261490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:25.325 [2024-12-06 13:40:18.261507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.578 ms 00:45:25.325 [2024-12-06 13:40:18.261519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.325 [2024-12-06 13:40:18.281718] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:45:25.325 [2024-12-06 13:40:18.281758] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:25.325 [2024-12-06 13:40:18.281776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.325 [2024-12-06 13:40:18.281787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:25.325 [2024-12-06 13:40:18.281800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.089 ms 00:45:25.325 [2024-12-06 13:40:18.281811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.325 [2024-12-06 13:40:18.312872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.325 [2024-12-06 13:40:18.312918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:25.325 [2024-12-06 13:40:18.312934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.950 ms 00:45:25.325 [2024-12-06 13:40:18.312946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.325 [2024-12-06 13:40:18.332115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.325 [2024-12-06 13:40:18.332157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:25.325 [2024-12-06 13:40:18.332173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.073 ms 00:45:25.325 [2024-12-06 13:40:18.332188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.325 [2024-12-06 13:40:18.350878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.325 [2024-12-06 13:40:18.350934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:25.325 [2024-12-06 13:40:18.350966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.585 ms 00:45:25.325 [2024-12-06 13:40:18.350976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.325 [2024-12-06 13:40:18.351880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.325 [2024-12-06 13:40:18.351913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:25.325 [2024-12-06 13:40:18.351927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:45:25.325 [2024-12-06 13:40:18.351950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.584 [2024-12-06 13:40:18.453105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.584 [2024-12-06 
13:40:18.453225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:25.584 [2024-12-06 13:40:18.453248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.114 ms 00:45:25.584 [2024-12-06 13:40:18.453260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.584 [2024-12-06 13:40:18.467056] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:45:25.584 [2024-12-06 13:40:18.496855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.584 [2024-12-06 13:40:18.496962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:25.584 [2024-12-06 13:40:18.496984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.386 ms 00:45:25.584 [2024-12-06 13:40:18.497007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.584 [2024-12-06 13:40:18.497214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.584 [2024-12-06 13:40:18.497230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:25.584 [2024-12-06 13:40:18.497243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:45:25.584 [2024-12-06 13:40:18.497254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.584 [2024-12-06 13:40:18.497333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.584 [2024-12-06 13:40:18.497346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:25.584 [2024-12-06 13:40:18.497358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:45:25.584 [2024-12-06 13:40:18.497375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.584 [2024-12-06 13:40:18.497441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.584 [2024-12-06 13:40:18.497458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:25.584 [2024-12-06 13:40:18.497470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:45:25.584 [2024-12-06 13:40:18.497480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.584 [2024-12-06 13:40:18.497529] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:25.584 [2024-12-06 13:40:18.497543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.584 [2024-12-06 13:40:18.497554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:25.584 [2024-12-06 13:40:18.497565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:45:25.584 [2024-12-06 13:40:18.497575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.584 [2024-12-06 13:40:18.537581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.584 [2024-12-06 13:40:18.537649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:25.585 [2024-12-06 13:40:18.537684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.978 ms 00:45:25.585 [2024-12-06 13:40:18.537696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.585 [2024-12-06 13:40:18.537841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:25.585 [2024-12-06 13:40:18.537858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:25.585 [2024-12-06 
13:40:18.537870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:45:25.585 [2024-12-06 13:40:18.537881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:25.585 [2024-12-06 13:40:18.539320] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:25.585 [2024-12-06 13:40:18.544524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 471.104 ms, result 0 00:45:25.585 [2024-12-06 13:40:18.545544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:25.585 [2024-12-06 13:40:18.564459] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:26.962  [2024-12-06T13:40:20.629Z] Copying: 33/256 [MB] (33 MBps) [2024-12-06T13:40:22.004Z] Copying: 64/256 [MB] (31 MBps) [2024-12-06T13:40:22.940Z] Copying: 94/256 [MB] (30 MBps) [2024-12-06T13:40:23.875Z] Copying: 125/256 [MB] (30 MBps) [2024-12-06T13:40:24.807Z] Copying: 155/256 [MB] (29 MBps) [2024-12-06T13:40:25.739Z] Copying: 184/256 [MB] (29 MBps) [2024-12-06T13:40:26.671Z] Copying: 215/256 [MB] (30 MBps) [2024-12-06T13:40:26.936Z] Copying: 246/256 [MB] (30 MBps) [2024-12-06T13:40:27.519Z] Copying: 256/256 [MB] (average 30 MBps)[2024-12-06 13:40:27.382752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:34.419 [2024-12-06 13:40:27.399831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.420 [2024-12-06 13:40:27.399911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:34.420 [2024-12-06 13:40:27.399941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:34.420 [2024-12-06 13:40:27.399953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.420 [2024-12-06 13:40:27.399985] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:45:34.420 [2024-12-06 13:40:27.405274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.420 [2024-12-06 13:40:27.405307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:34.420 [2024-12-06 13:40:27.405320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.270 ms 00:45:34.420 [2024-12-06 13:40:27.405332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.420 [2024-12-06 13:40:27.405663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.420 [2024-12-06 13:40:27.405686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:34.420 [2024-12-06 13:40:27.405698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:45:34.420 [2024-12-06 13:40:27.405710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.420 [2024-12-06 13:40:27.408849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.420 [2024-12-06 13:40:27.408873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:34.420 [2024-12-06 13:40:27.408886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.115 ms 00:45:34.420 [2024-12-06 13:40:27.408898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.420 [2024-12-06 13:40:27.414691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:45:34.420 [2024-12-06 13:40:27.414722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:34.420 [2024-12-06 13:40:27.414735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.770 ms 00:45:34.420 [2024-12-06 13:40:27.414746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.420 [2024-12-06 13:40:27.460682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.420 [2024-12-06 13:40:27.460758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:34.420 [2024-12-06 13:40:27.460778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.839 ms 00:45:34.420 [2024-12-06 13:40:27.460790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.420 [2024-12-06 13:40:27.481870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.420 [2024-12-06 13:40:27.481914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:34.420 [2024-12-06 13:40:27.481954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.000 ms 00:45:34.420 [2024-12-06 13:40:27.481965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.420 [2024-12-06 13:40:27.482134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.420 [2024-12-06 13:40:27.482149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:34.420 [2024-12-06 13:40:27.482175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:45:34.420 [2024-12-06 13:40:27.482186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.679 [2024-12-06 13:40:27.521764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.679 [2024-12-06 13:40:27.521815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:34.679 [2024-12-06 13:40:27.521834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.556 ms 00:45:34.679 [2024-12-06 13:40:27.521847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.679 [2024-12-06 13:40:27.558984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.679 [2024-12-06 13:40:27.559028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:34.679 [2024-12-06 13:40:27.559060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.079 ms 00:45:34.679 [2024-12-06 13:40:27.559070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.679 [2024-12-06 13:40:27.596560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.679 [2024-12-06 13:40:27.596602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:34.679 [2024-12-06 13:40:27.596616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.444 ms 00:45:34.679 [2024-12-06 13:40:27.596627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.679 [2024-12-06 13:40:27.633356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.679 [2024-12-06 13:40:27.633408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:34.679 [2024-12-06 13:40:27.633423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.620 ms 00:45:34.679 [2024-12-06 13:40:27.633435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.679 [2024-12-06 
13:40:27.633486] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:34.679 [2024-12-06 13:40:27.633506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:34.679 [2024-12-06 13:40:27.633676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 
13:40:27.633786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.633996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:45:34.680 [2024-12-06 13:40:27.634061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:34.680 [2024-12-06 13:40:27.634652] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:34.681 [2024-12-06 13:40:27.634663] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 484dddcb-97b5-4cf1-8d97-550a0be11fc7 00:45:34.681 [2024-12-06 13:40:27.634675] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:45:34.681 [2024-12-06 13:40:27.634685] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:45:34.681 [2024-12-06 13:40:27.634696] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:34.681 [2024-12-06 13:40:27.634708] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:34.681 [2024-12-06 13:40:27.634718] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:34.681 [2024-12-06 13:40:27.634729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:34.681 [2024-12-06 13:40:27.634745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:34.681 [2024-12-06 13:40:27.634754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:34.681 [2024-12-06 13:40:27.634764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:34.681 [2024-12-06 13:40:27.634774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.681 [2024-12-06 13:40:27.634785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:34.681 [2024-12-06 13:40:27.634797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.289 ms 00:45:34.681 [2024-12-06 13:40:27.634807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.681 [2024-12-06 13:40:27.656571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.681 [2024-12-06 13:40:27.656610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:34.681 [2024-12-06 13:40:27.656624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.741 ms 00:45:34.681 [2024-12-06 13:40:27.656636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.681 [2024-12-06 13:40:27.657283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.681 [2024-12-06 13:40:27.657306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:34.681 [2024-12-06 13:40:27.657318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:45:34.681 [2024-12-06 13:40:27.657329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.681 [2024-12-06 13:40:27.717725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:34.681 [2024-12-06 13:40:27.717788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:34.681 [2024-12-06 13:40:27.717803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:34.681 [2024-12-06 13:40:27.717822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.681 [2024-12-06 13:40:27.717967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:34.681 [2024-12-06 13:40:27.717981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:34.681 [2024-12-06 13:40:27.717993] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:34.681 [2024-12-06 13:40:27.718005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.681 [2024-12-06 13:40:27.718074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:34.681 [2024-12-06 13:40:27.718089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:34.681 [2024-12-06 13:40:27.718101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:34.681 [2024-12-06 13:40:27.718113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.681 [2024-12-06 13:40:27.718140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:34.681 [2024-12-06 13:40:27.718152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:34.681 [2024-12-06 13:40:27.718164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:34.681 [2024-12-06 13:40:27.718176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.939 [2024-12-06 13:40:27.859515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:34.939 [2024-12-06 13:40:27.859618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:34.939 [2024-12-06 13:40:27.859638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:34.939 [2024-12-06 13:40:27.859650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.939 [2024-12-06 13:40:27.970063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:34.939 [2024-12-06 13:40:27.970148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:34.939 [2024-12-06 13:40:27.970166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:34.939 [2024-12-06 13:40:27.970178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.939 [2024-12-06 13:40:27.970303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:34.939 [2024-12-06 13:40:27.970316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:34.939 [2024-12-06 13:40:27.970329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:34.939 [2024-12-06 13:40:27.970340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.939 [2024-12-06 13:40:27.970375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:34.939 [2024-12-06 13:40:27.970411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:34.939 [2024-12-06 13:40:27.970423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:34.939 [2024-12-06 13:40:27.970434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.939 [2024-12-06 13:40:27.970576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:34.939 [2024-12-06 13:40:27.970591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:34.939 [2024-12-06 13:40:27.970602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:34.939 [2024-12-06 13:40:27.970613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.939 [2024-12-06 13:40:27.970655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:34.939 [2024-12-06 13:40:27.970668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:45:34.939 [2024-12-06 13:40:27.970684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:34.939 [2024-12-06 13:40:27.970695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.939 [2024-12-06 13:40:27.970745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:34.939 [2024-12-06 13:40:27.970757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:34.939 [2024-12-06 13:40:27.970769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:34.939 [2024-12-06 13:40:27.970779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.939 [2024-12-06 13:40:27.970866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:34.939 [2024-12-06 13:40:27.970887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:34.939 [2024-12-06 13:40:27.970899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:34.939 [2024-12-06 13:40:27.970910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.939 [2024-12-06 13:40:27.971105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 571.280 ms, result 0 00:45:36.447 00:45:36.447 00:45:36.447 13:40:29 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:45:36.705 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:45:36.705 13:40:29 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:45:36.705 13:40:29 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:45:36.705 13:40:29 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:45:36.705 13:40:29 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:36.705 13:40:29 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:45:36.705 13:40:29 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:45:36.964 13:40:29 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79713 00:45:36.964 13:40:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79713 ']' 00:45:36.964 13:40:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79713 00:45:36.964 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79713) - No such process 00:45:36.965 Process with pid 79713 is not found 00:45:36.965 13:40:29 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79713 is not found' 00:45:36.965 00:45:36.965 real 1m10.686s 00:45:36.965 user 1m35.935s 00:45:36.965 sys 0m8.819s 00:45:36.965 13:40:29 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:36.965 ************************************ 00:45:36.965 END TEST ftl_trim 00:45:36.965 ************************************ 00:45:36.965 13:40:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:45:36.965 13:40:29 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:45:36.965 13:40:29 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:45:36.965 13:40:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:36.965 13:40:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:45:36.965 ************************************ 00:45:36.965 START TEST ftl_restore 00:45:36.965 
************************************ 00:45:36.965 13:40:29 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:45:36.965 * Looking for test storage... 00:45:36.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:45:36.965 13:40:29 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:36.965 13:40:29 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:45:36.965 13:40:29 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:36.965 13:40:30 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:36.965 13:40:30 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:37.224 13:40:30 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:45:37.224 13:40:30 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:37.224 13:40:30 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:37.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:37.224 --rc genhtml_branch_coverage=1 00:45:37.224 --rc genhtml_function_coverage=1 00:45:37.224 --rc genhtml_legend=1 00:45:37.224 --rc geninfo_all_blocks=1 00:45:37.224 --rc geninfo_unexecuted_blocks=1 00:45:37.224 00:45:37.224 ' 00:45:37.224 13:40:30 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:45:37.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:37.224 --rc genhtml_branch_coverage=1 00:45:37.224 --rc genhtml_function_coverage=1 00:45:37.224 --rc genhtml_legend=1 00:45:37.224 --rc geninfo_all_blocks=1 00:45:37.224 --rc geninfo_unexecuted_blocks=1 00:45:37.224 00:45:37.224 ' 00:45:37.224 13:40:30 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:37.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:37.224 --rc genhtml_branch_coverage=1 00:45:37.224 --rc genhtml_function_coverage=1 00:45:37.224 --rc genhtml_legend=1 00:45:37.224 --rc geninfo_all_blocks=1 00:45:37.225 --rc geninfo_unexecuted_blocks=1 00:45:37.225 00:45:37.225 ' 00:45:37.225 13:40:30 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:37.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:37.225 --rc genhtml_branch_coverage=1 00:45:37.225 --rc genhtml_function_coverage=1 00:45:37.225 --rc genhtml_legend=1 00:45:37.225 --rc geninfo_all_blocks=1 00:45:37.225 --rc geninfo_unexecuted_blocks=1 00:45:37.225 00:45:37.225 ' 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.R15dSJSC6B 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:45:37.225 
13:40:30 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79985 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79985 00:45:37.225 13:40:30 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79985 ']' 00:45:37.225 13:40:30 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:37.225 13:40:30 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:37.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:37.225 13:40:30 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:37.225 13:40:30 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:37.225 13:40:30 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:37.225 13:40:30 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:45:37.225 [2024-12-06 13:40:30.272366] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:45:37.225 [2024-12-06 13:40:30.272580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79985 ] 00:45:37.484 [2024-12-06 13:40:30.467774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:37.743 [2024-12-06 13:40:30.605584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:38.681 13:40:31 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:38.681 13:40:31 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:45:38.681 13:40:31 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:45:38.681 13:40:31 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:45:38.681 13:40:31 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:45:38.681 13:40:31 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:45:38.681 13:40:31 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:45:38.681 13:40:31 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:45:38.940 13:40:31 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:45:38.940 13:40:31 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:45:38.940 13:40:31 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:45:38.940 13:40:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:45:38.940 13:40:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:38.940 13:40:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:45:38.940 13:40:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:45:38.940 13:40:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:45:39.199 13:40:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:39.199 { 00:45:39.199 "name": "nvme0n1", 00:45:39.199 "aliases": [ 00:45:39.199 "069df667-a29c-486f-aed0-07ea6b4a2034" 00:45:39.199 ], 00:45:39.199 "product_name": "NVMe disk", 00:45:39.199 "block_size": 4096, 00:45:39.199 "num_blocks": 1310720, 00:45:39.199 "uuid": 
"069df667-a29c-486f-aed0-07ea6b4a2034", 00:45:39.199 "numa_id": -1, 00:45:39.199 "assigned_rate_limits": { 00:45:39.199 "rw_ios_per_sec": 0, 00:45:39.200 "rw_mbytes_per_sec": 0, 00:45:39.200 "r_mbytes_per_sec": 0, 00:45:39.200 "w_mbytes_per_sec": 0 00:45:39.200 }, 00:45:39.200 "claimed": true, 00:45:39.200 "claim_type": "read_many_write_one", 00:45:39.200 "zoned": false, 00:45:39.200 "supported_io_types": { 00:45:39.200 "read": true, 00:45:39.200 "write": true, 00:45:39.200 "unmap": true, 00:45:39.200 "flush": true, 00:45:39.200 "reset": true, 00:45:39.200 "nvme_admin": true, 00:45:39.200 "nvme_io": true, 00:45:39.200 "nvme_io_md": false, 00:45:39.200 "write_zeroes": true, 00:45:39.200 "zcopy": false, 00:45:39.200 "get_zone_info": false, 00:45:39.200 "zone_management": false, 00:45:39.200 "zone_append": false, 00:45:39.200 "compare": true, 00:45:39.200 "compare_and_write": false, 00:45:39.200 "abort": true, 00:45:39.200 "seek_hole": false, 00:45:39.200 "seek_data": false, 00:45:39.200 "copy": true, 00:45:39.200 "nvme_iov_md": false 00:45:39.200 }, 00:45:39.200 "driver_specific": { 00:45:39.200 "nvme": [ 00:45:39.200 { 00:45:39.200 "pci_address": "0000:00:11.0", 00:45:39.200 "trid": { 00:45:39.200 "trtype": "PCIe", 00:45:39.200 "traddr": "0000:00:11.0" 00:45:39.200 }, 00:45:39.200 "ctrlr_data": { 00:45:39.200 "cntlid": 0, 00:45:39.200 "vendor_id": "0x1b36", 00:45:39.200 "model_number": "QEMU NVMe Ctrl", 00:45:39.200 "serial_number": "12341", 00:45:39.200 "firmware_revision": "8.0.0", 00:45:39.200 "subnqn": "nqn.2019-08.org.qemu:12341", 00:45:39.200 "oacs": { 00:45:39.200 "security": 0, 00:45:39.200 "format": 1, 00:45:39.200 "firmware": 0, 00:45:39.200 "ns_manage": 1 00:45:39.200 }, 00:45:39.200 "multi_ctrlr": false, 00:45:39.200 "ana_reporting": false 00:45:39.200 }, 00:45:39.200 "vs": { 00:45:39.200 "nvme_version": "1.4" 00:45:39.200 }, 00:45:39.200 "ns_data": { 00:45:39.200 "id": 1, 00:45:39.200 "can_share": false 00:45:39.200 } 00:45:39.200 } 00:45:39.200 ], 00:45:39.200 "mp_policy": "active_passive" 00:45:39.200 } 00:45:39.200 } 00:45:39.200 ]' 00:45:39.200 13:40:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:45:39.200 13:40:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:45:39.200 13:40:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:39.459 13:40:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:45:39.459 13:40:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:45:39.459 13:40:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:45:39.459 13:40:32 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:45:39.459 13:40:32 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:45:39.459 13:40:32 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:45:39.459 13:40:32 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:45:39.459 13:40:32 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:45:39.718 13:40:32 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=e3f3d1cc-f8a7-48df-bea0-ca3f3a86a216 00:45:39.718 13:40:32 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:45:39.718 13:40:32 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e3f3d1cc-f8a7-48df-bea0-ca3f3a86a216 00:45:39.978 13:40:32 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:45:39.978 13:40:33 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=120fa9f7-5b87-4f84-9af6-da3398e01b9a 00:45:39.978 13:40:33 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 120fa9f7-5b87-4f84-9af6-da3398e01b9a 00:45:40.237 13:40:33 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=cfc51472-1cba-4873-b6c5-20a39ce77f9f 00:45:40.237 13:40:33 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:45:40.237 13:40:33 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cfc51472-1cba-4873-b6c5-20a39ce77f9f 00:45:40.237 13:40:33 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:45:40.237 13:40:33 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:45:40.237 13:40:33 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=cfc51472-1cba-4873-b6c5-20a39ce77f9f 00:45:40.237 13:40:33 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:45:40.238 13:40:33 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size cfc51472-1cba-4873-b6c5-20a39ce77f9f 00:45:40.238 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=cfc51472-1cba-4873-b6c5-20a39ce77f9f 00:45:40.238 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:40.238 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:45:40.238 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:45:40.238 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cfc51472-1cba-4873-b6c5-20a39ce77f9f 00:45:40.497 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:40.497 { 00:45:40.497 "name": "cfc51472-1cba-4873-b6c5-20a39ce77f9f", 00:45:40.497 "aliases": [ 00:45:40.497 "lvs/nvme0n1p0" 00:45:40.497 ], 00:45:40.497 "product_name": "Logical Volume", 00:45:40.497 "block_size": 4096, 00:45:40.497 "num_blocks": 26476544, 00:45:40.497 "uuid": "cfc51472-1cba-4873-b6c5-20a39ce77f9f", 00:45:40.497 "assigned_rate_limits": { 00:45:40.497 "rw_ios_per_sec": 0, 00:45:40.497 "rw_mbytes_per_sec": 0, 00:45:40.497 "r_mbytes_per_sec": 0, 00:45:40.497 "w_mbytes_per_sec": 0 00:45:40.497 }, 00:45:40.497 "claimed": false, 00:45:40.497 "zoned": false, 00:45:40.497 "supported_io_types": { 00:45:40.497 "read": true, 00:45:40.497 "write": true, 00:45:40.497 "unmap": true, 00:45:40.497 "flush": false, 00:45:40.497 "reset": true, 00:45:40.497 "nvme_admin": false, 00:45:40.497 "nvme_io": false, 00:45:40.497 "nvme_io_md": false, 00:45:40.497 "write_zeroes": true, 00:45:40.497 "zcopy": false, 00:45:40.497 "get_zone_info": false, 00:45:40.497 "zone_management": false, 00:45:40.497 "zone_append": false, 00:45:40.497 "compare": false, 00:45:40.497 "compare_and_write": false, 00:45:40.497 "abort": false, 00:45:40.497 "seek_hole": true, 00:45:40.497 "seek_data": true, 00:45:40.497 "copy": false, 00:45:40.497 "nvme_iov_md": false 00:45:40.497 }, 00:45:40.497 "driver_specific": { 00:45:40.497 "lvol": { 00:45:40.497 "lvol_store_uuid": "120fa9f7-5b87-4f84-9af6-da3398e01b9a", 00:45:40.497 "base_bdev": "nvme0n1", 00:45:40.497 "thin_provision": true, 00:45:40.497 "num_allocated_clusters": 0, 00:45:40.497 "snapshot": false, 00:45:40.497 "clone": false, 00:45:40.497 "esnap_clone": false 00:45:40.497 } 00:45:40.497 } 00:45:40.497 } 00:45:40.497 ]' 00:45:40.497 13:40:33 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:45:40.497 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:45:40.497 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:40.497 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:45:40.497 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:45:40.497 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:45:40.497 13:40:33 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:45:40.497 13:40:33 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:45:40.497 13:40:33 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:45:41.066 13:40:33 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:45:41.066 13:40:33 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:45:41.066 13:40:33 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size cfc51472-1cba-4873-b6c5-20a39ce77f9f 00:45:41.066 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=cfc51472-1cba-4873-b6c5-20a39ce77f9f 00:45:41.066 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:41.066 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:45:41.066 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:45:41.066 13:40:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cfc51472-1cba-4873-b6c5-20a39ce77f9f 00:45:41.066 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:41.066 { 00:45:41.066 "name": "cfc51472-1cba-4873-b6c5-20a39ce77f9f", 00:45:41.066 "aliases": [ 00:45:41.066 "lvs/nvme0n1p0" 00:45:41.066 ], 00:45:41.066 "product_name": "Logical Volume", 00:45:41.066 "block_size": 4096, 00:45:41.066 "num_blocks": 26476544, 00:45:41.066 "uuid": "cfc51472-1cba-4873-b6c5-20a39ce77f9f", 00:45:41.066 "assigned_rate_limits": { 00:45:41.066 "rw_ios_per_sec": 0, 00:45:41.066 "rw_mbytes_per_sec": 0, 00:45:41.066 "r_mbytes_per_sec": 0, 00:45:41.066 "w_mbytes_per_sec": 0 00:45:41.066 }, 00:45:41.066 "claimed": false, 00:45:41.066 "zoned": false, 00:45:41.066 "supported_io_types": { 00:45:41.066 "read": true, 00:45:41.066 "write": true, 00:45:41.066 "unmap": true, 00:45:41.066 "flush": false, 00:45:41.066 "reset": true, 00:45:41.066 "nvme_admin": false, 00:45:41.066 "nvme_io": false, 00:45:41.066 "nvme_io_md": false, 00:45:41.066 "write_zeroes": true, 00:45:41.066 "zcopy": false, 00:45:41.066 "get_zone_info": false, 00:45:41.066 "zone_management": false, 00:45:41.066 "zone_append": false, 00:45:41.066 "compare": false, 00:45:41.066 "compare_and_write": false, 00:45:41.066 "abort": false, 00:45:41.066 "seek_hole": true, 00:45:41.066 "seek_data": true, 00:45:41.066 "copy": false, 00:45:41.066 "nvme_iov_md": false 00:45:41.066 }, 00:45:41.066 "driver_specific": { 00:45:41.066 "lvol": { 00:45:41.066 "lvol_store_uuid": "120fa9f7-5b87-4f84-9af6-da3398e01b9a", 00:45:41.066 "base_bdev": "nvme0n1", 00:45:41.066 "thin_provision": true, 00:45:41.066 "num_allocated_clusters": 0, 00:45:41.066 "snapshot": false, 00:45:41.066 "clone": false, 00:45:41.066 "esnap_clone": false 00:45:41.066 } 00:45:41.066 } 00:45:41.066 } 00:45:41.066 ]' 00:45:41.066 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:45:41.066 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:45:41.067 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:41.326 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:45:41.326 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:45:41.326 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:45:41.326 13:40:34 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:45:41.326 13:40:34 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:45:41.586 13:40:34 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:45:41.586 13:40:34 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size cfc51472-1cba-4873-b6c5-20a39ce77f9f 00:45:41.586 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=cfc51472-1cba-4873-b6c5-20a39ce77f9f 00:45:41.586 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:45:41.586 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:45:41.586 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:45:41.586 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cfc51472-1cba-4873-b6c5-20a39ce77f9f 00:45:41.586 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:45:41.586 { 00:45:41.586 "name": "cfc51472-1cba-4873-b6c5-20a39ce77f9f", 00:45:41.586 "aliases": [ 00:45:41.586 "lvs/nvme0n1p0" 00:45:41.586 ], 00:45:41.586 "product_name": "Logical Volume", 00:45:41.586 "block_size": 4096, 00:45:41.586 "num_blocks": 26476544, 00:45:41.586 "uuid": "cfc51472-1cba-4873-b6c5-20a39ce77f9f", 00:45:41.586 "assigned_rate_limits": { 00:45:41.586 "rw_ios_per_sec": 0, 00:45:41.586 "rw_mbytes_per_sec": 0, 00:45:41.586 "r_mbytes_per_sec": 0, 00:45:41.586 "w_mbytes_per_sec": 0 00:45:41.586 }, 00:45:41.586 "claimed": false, 00:45:41.586 "zoned": false, 00:45:41.586 "supported_io_types": { 00:45:41.586 "read": true, 00:45:41.586 "write": true, 00:45:41.586 "unmap": true, 00:45:41.586 "flush": false, 00:45:41.586 "reset": true, 00:45:41.586 "nvme_admin": false, 00:45:41.586 "nvme_io": false, 00:45:41.586 "nvme_io_md": false, 00:45:41.586 "write_zeroes": true, 00:45:41.586 "zcopy": false, 00:45:41.586 "get_zone_info": false, 00:45:41.586 "zone_management": false, 00:45:41.586 "zone_append": false, 00:45:41.586 "compare": false, 00:45:41.586 "compare_and_write": false, 00:45:41.586 "abort": false, 00:45:41.586 "seek_hole": true, 00:45:41.586 "seek_data": true, 00:45:41.586 "copy": false, 00:45:41.586 "nvme_iov_md": false 00:45:41.586 }, 00:45:41.586 "driver_specific": { 00:45:41.586 "lvol": { 00:45:41.586 "lvol_store_uuid": "120fa9f7-5b87-4f84-9af6-da3398e01b9a", 00:45:41.586 "base_bdev": "nvme0n1", 00:45:41.586 "thin_provision": true, 00:45:41.586 "num_allocated_clusters": 0, 00:45:41.586 "snapshot": false, 00:45:41.586 "clone": false, 00:45:41.586 "esnap_clone": false 00:45:41.586 } 00:45:41.586 } 00:45:41.586 } 00:45:41.586 ]' 00:45:41.586 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:45:41.846 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:45:41.846 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:45:41.846 13:40:34 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:45:41.846 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:45:41.846 13:40:34 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:45:41.846 13:40:34 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:45:41.846 13:40:34 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d cfc51472-1cba-4873-b6c5-20a39ce77f9f --l2p_dram_limit 10' 00:45:41.846 13:40:34 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:45:41.846 13:40:34 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:45:41.846 13:40:34 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:45:41.846 13:40:34 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:45:41.846 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:45:41.846 13:40:34 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cfc51472-1cba-4873-b6c5-20a39ce77f9f --l2p_dram_limit 10 -c nvc0n1p0 00:45:42.107 [2024-12-06 13:40:34.944755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.107 [2024-12-06 13:40:34.945344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:42.107 [2024-12-06 13:40:34.945480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:42.107 [2024-12-06 13:40:34.945563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.107 [2024-12-06 13:40:34.945716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.107 [2024-12-06 13:40:34.945734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:42.107 [2024-12-06 13:40:34.945750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:45:42.107 [2024-12-06 13:40:34.945761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.107 [2024-12-06 13:40:34.945798] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:42.107 [2024-12-06 13:40:34.947199] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:42.107 [2024-12-06 13:40:34.947303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.107 [2024-12-06 13:40:34.947354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:42.107 [2024-12-06 13:40:34.947428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.515 ms 00:45:42.107 [2024-12-06 13:40:34.947489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.107 [2024-12-06 13:40:34.947712] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5eb76c83-dca2-44d9-b449-b6a760e65e2b 00:45:42.107 [2024-12-06 13:40:34.950327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.107 [2024-12-06 13:40:34.950455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:45:42.107 [2024-12-06 13:40:34.950526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:45:42.107 [2024-12-06 13:40:34.950589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.107 [2024-12-06 13:40:34.965180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.107 [2024-12-06 
13:40:34.965350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:42.107 [2024-12-06 13:40:34.965445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.471 ms 00:45:42.107 [2024-12-06 13:40:34.965517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.107 [2024-12-06 13:40:34.965687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.107 [2024-12-06 13:40:34.965751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:42.107 [2024-12-06 13:40:34.965806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:45:42.107 [2024-12-06 13:40:34.965871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.107 [2024-12-06 13:40:34.965998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.107 [2024-12-06 13:40:34.966083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:42.107 [2024-12-06 13:40:34.966154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:45:42.107 [2024-12-06 13:40:34.966219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.107 [2024-12-06 13:40:34.966292] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:42.107 [2024-12-06 13:40:34.972370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.107 [2024-12-06 13:40:34.972482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:42.107 [2024-12-06 13:40:34.972545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.085 ms 00:45:42.107 [2024-12-06 13:40:34.972600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.107 [2024-12-06 13:40:34.972679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.107 [2024-12-06 13:40:34.972724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:42.107 [2024-12-06 13:40:34.972790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:45:42.107 [2024-12-06 13:40:34.972848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.107 [2024-12-06 13:40:34.972929] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:45:42.107 [2024-12-06 13:40:34.973131] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:42.107 [2024-12-06 13:40:34.973222] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:42.107 [2024-12-06 13:40:34.973280] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:42.107 [2024-12-06 13:40:34.973302] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:42.107 [2024-12-06 13:40:34.973315] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:42.107 [2024-12-06 13:40:34.973331] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:45:42.107 [2024-12-06 13:40:34.973341] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:42.107 [2024-12-06 13:40:34.973362] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:42.107 [2024-12-06 13:40:34.973372] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:42.107 [2024-12-06 13:40:34.973387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.107 [2024-12-06 13:40:34.973421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:42.107 [2024-12-06 13:40:34.973437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:45:42.107 [2024-12-06 13:40:34.973448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.107 [2024-12-06 13:40:34.973536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.107 [2024-12-06 13:40:34.973548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:42.107 [2024-12-06 13:40:34.973562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:45:42.107 [2024-12-06 13:40:34.973572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.107 [2024-12-06 13:40:34.973674] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:42.107 [2024-12-06 13:40:34.973700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:42.107 [2024-12-06 13:40:34.973716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:42.107 [2024-12-06 13:40:34.973727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:42.107 [2024-12-06 13:40:34.973742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:42.107 [2024-12-06 13:40:34.973751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:42.107 [2024-12-06 13:40:34.973765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:45:42.107 [2024-12-06 13:40:34.973774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:42.107 [2024-12-06 13:40:34.973788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:45:42.107 [2024-12-06 13:40:34.973798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:42.107 [2024-12-06 13:40:34.973811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:42.107 [2024-12-06 13:40:34.973821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:45:42.107 [2024-12-06 13:40:34.973835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:42.107 [2024-12-06 13:40:34.973845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:42.107 [2024-12-06 13:40:34.973859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:45:42.107 [2024-12-06 13:40:34.973868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:42.107 [2024-12-06 13:40:34.973883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:42.107 [2024-12-06 13:40:34.973893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:45:42.107 [2024-12-06 13:40:34.973909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:42.107 [2024-12-06 13:40:34.973919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:42.107 [2024-12-06 13:40:34.973931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:45:42.107 [2024-12-06 13:40:34.973940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:42.107 [2024-12-06 13:40:34.973953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:42.107 
[2024-12-06 13:40:34.973962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:45:42.107 [2024-12-06 13:40:34.973976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:42.107 [2024-12-06 13:40:34.973985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:42.107 [2024-12-06 13:40:34.973998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:45:42.107 [2024-12-06 13:40:34.974007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:42.107 [2024-12-06 13:40:34.974020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:42.107 [2024-12-06 13:40:34.974029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:45:42.107 [2024-12-06 13:40:34.974042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:42.107 [2024-12-06 13:40:34.974052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:42.107 [2024-12-06 13:40:34.974067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:45:42.107 [2024-12-06 13:40:34.974077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:42.107 [2024-12-06 13:40:34.974089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:42.107 [2024-12-06 13:40:34.974099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:45:42.107 [2024-12-06 13:40:34.974111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:42.107 [2024-12-06 13:40:34.974120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:42.107 [2024-12-06 13:40:34.974135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:45:42.107 [2024-12-06 13:40:34.974144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:42.107 [2024-12-06 13:40:34.974156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:42.107 [2024-12-06 13:40:34.974166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:45:42.107 [2024-12-06 13:40:34.974178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:42.107 [2024-12-06 13:40:34.974187] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:42.107 [2024-12-06 13:40:34.974200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:42.107 [2024-12-06 13:40:34.974210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:42.107 [2024-12-06 13:40:34.974225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:42.107 [2024-12-06 13:40:34.974235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:42.107 [2024-12-06 13:40:34.974251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:42.107 [2024-12-06 13:40:34.974260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:42.107 [2024-12-06 13:40:34.974274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:42.107 [2024-12-06 13:40:34.974284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:42.107 [2024-12-06 13:40:34.974298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:42.107 [2024-12-06 13:40:34.974311] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:42.107 [2024-12-06 
13:40:34.974332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:42.107 [2024-12-06 13:40:34.974344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:45:42.107 [2024-12-06 13:40:34.974358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:45:42.107 [2024-12-06 13:40:34.974369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:45:42.107 [2024-12-06 13:40:34.974384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:45:42.107 [2024-12-06 13:40:34.974406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:45:42.107 [2024-12-06 13:40:34.974420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:45:42.107 [2024-12-06 13:40:34.974431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:45:42.107 [2024-12-06 13:40:34.974445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:45:42.107 [2024-12-06 13:40:34.974456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:45:42.107 [2024-12-06 13:40:34.974474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:45:42.107 [2024-12-06 13:40:34.974485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:45:42.107 [2024-12-06 13:40:34.974499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:45:42.107 [2024-12-06 13:40:34.974510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:45:42.107 [2024-12-06 13:40:34.974524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:45:42.107 [2024-12-06 13:40:34.974534] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:42.107 [2024-12-06 13:40:34.974549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:42.107 [2024-12-06 13:40:34.974570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:42.107 [2024-12-06 13:40:34.974584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:42.107 [2024-12-06 13:40:34.974595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:42.107 [2024-12-06 13:40:34.974610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:42.107 [2024-12-06 13:40:34.974622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:42.107 [2024-12-06 13:40:34.974635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:42.108 [2024-12-06 13:40:34.974646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:45:42.108 [2024-12-06 13:40:34.974660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:42.108 [2024-12-06 13:40:34.974709] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:45:42.108 [2024-12-06 13:40:34.974730] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:45:44.637 [2024-12-06 13:40:37.596332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.637 [2024-12-06 13:40:37.596417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:45:44.637 [2024-12-06 13:40:37.596437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2621.605 ms 00:45:44.637 [2024-12-06 13:40:37.596452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.637 [2024-12-06 13:40:37.647975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.637 [2024-12-06 13:40:37.648041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:44.637 [2024-12-06 13:40:37.648058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.184 ms 00:45:44.637 [2024-12-06 13:40:37.648073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.637 [2024-12-06 13:40:37.648264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.637 [2024-12-06 13:40:37.648282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:44.637 [2024-12-06 13:40:37.648294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:45:44.637 [2024-12-06 13:40:37.648318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.637 [2024-12-06 13:40:37.702283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.637 [2024-12-06 13:40:37.702356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:44.637 [2024-12-06 13:40:37.702372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.890 ms 00:45:44.637 [2024-12-06 13:40:37.702387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.637 [2024-12-06 13:40:37.702455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.637 [2024-12-06 13:40:37.702478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:44.637 [2024-12-06 13:40:37.702490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:44.637 [2024-12-06 13:40:37.702517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.637 [2024-12-06 13:40:37.703357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.637 [2024-12-06 13:40:37.703376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:44.637 [2024-12-06 13:40:37.703387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:45:44.637 [2024-12-06 13:40:37.703400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.637 
[2024-12-06 13:40:37.703542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.637 [2024-12-06 13:40:37.703565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:44.637 [2024-12-06 13:40:37.703581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:45:44.637 [2024-12-06 13:40:37.703615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.637 [2024-12-06 13:40:37.729671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.637 [2024-12-06 13:40:37.729728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:44.637 [2024-12-06 13:40:37.729744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.032 ms 00:45:44.637 [2024-12-06 13:40:37.729758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.895 [2024-12-06 13:40:37.756150] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:45:44.895 [2024-12-06 13:40:37.761518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.895 [2024-12-06 13:40:37.761549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:44.895 [2024-12-06 13:40:37.761568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.634 ms 00:45:44.895 [2024-12-06 13:40:37.761580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.895 [2024-12-06 13:40:37.844889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.895 [2024-12-06 13:40:37.844967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:45:44.895 [2024-12-06 13:40:37.845006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.258 ms 00:45:44.895 [2024-12-06 13:40:37.845018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.895 [2024-12-06 13:40:37.845239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.895 [2024-12-06 13:40:37.845258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:44.895 [2024-12-06 13:40:37.845278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:45:44.895 [2024-12-06 13:40:37.845290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.895 [2024-12-06 13:40:37.881856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.895 [2024-12-06 13:40:37.881889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:45:44.895 [2024-12-06 13:40:37.881906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.506 ms 00:45:44.895 [2024-12-06 13:40:37.881917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.895 [2024-12-06 13:40:37.917518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.895 [2024-12-06 13:40:37.917551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:45:44.895 [2024-12-06 13:40:37.917569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.548 ms 00:45:44.895 [2024-12-06 13:40:37.917580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:44.895 [2024-12-06 13:40:37.918338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:44.895 [2024-12-06 13:40:37.918359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:44.896 
[2024-12-06 13:40:37.918374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:45:44.896 [2024-12-06 13:40:37.918388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.154 [2024-12-06 13:40:38.017595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.154 [2024-12-06 13:40:38.017643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:45:45.154 [2024-12-06 13:40:38.017668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.137 ms 00:45:45.154 [2024-12-06 13:40:38.017680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.154 [2024-12-06 13:40:38.056498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.154 [2024-12-06 13:40:38.056537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:45:45.154 [2024-12-06 13:40:38.056572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.747 ms 00:45:45.154 [2024-12-06 13:40:38.056583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.154 [2024-12-06 13:40:38.092587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.154 [2024-12-06 13:40:38.092623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:45:45.154 [2024-12-06 13:40:38.092640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.957 ms 00:45:45.154 [2024-12-06 13:40:38.092650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.154 [2024-12-06 13:40:38.129104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.154 [2024-12-06 13:40:38.129136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:45.154 [2024-12-06 13:40:38.129153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.423 ms 00:45:45.154 [2024-12-06 13:40:38.129163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.154 [2024-12-06 13:40:38.129197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.154 [2024-12-06 13:40:38.129208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:45.154 [2024-12-06 13:40:38.129227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:45.154 [2024-12-06 13:40:38.129237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.154 [2024-12-06 13:40:38.129342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.154 [2024-12-06 13:40:38.129358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:45.154 [2024-12-06 13:40:38.129371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:45:45.154 [2024-12-06 13:40:38.129380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.154 [2024-12-06 13:40:38.131164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3185.635 ms, result 0 00:45:45.154 { 00:45:45.154 "name": "ftl0", 00:45:45.154 "uuid": "5eb76c83-dca2-44d9-b449-b6a760e65e2b" 00:45:45.154 } 00:45:45.154 13:40:38 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:45:45.154 13:40:38 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:45:45.413 13:40:38 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:45:45.413 13:40:38 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:45:45.671 [2024-12-06 13:40:38.681960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.671 [2024-12-06 13:40:38.682047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:45.671 [2024-12-06 13:40:38.682066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:45.671 [2024-12-06 13:40:38.682081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.671 [2024-12-06 13:40:38.682111] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:45.671 [2024-12-06 13:40:38.686994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.671 [2024-12-06 13:40:38.687025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:45.671 [2024-12-06 13:40:38.687041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.856 ms 00:45:45.671 [2024-12-06 13:40:38.687053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.671 [2024-12-06 13:40:38.687339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.671 [2024-12-06 13:40:38.687362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:45.672 [2024-12-06 13:40:38.687377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:45:45.672 [2024-12-06 13:40:38.687389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.672 [2024-12-06 13:40:38.690123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.672 [2024-12-06 13:40:38.690144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:45.672 [2024-12-06 13:40:38.690159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.703 ms 00:45:45.672 [2024-12-06 13:40:38.690170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.672 [2024-12-06 13:40:38.695348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.672 [2024-12-06 13:40:38.695377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:45.672 [2024-12-06 13:40:38.695406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.153 ms 00:45:45.672 [2024-12-06 13:40:38.695418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.672 [2024-12-06 13:40:38.735773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.672 [2024-12-06 13:40:38.735839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:45.672 [2024-12-06 13:40:38.735860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.274 ms 00:45:45.672 [2024-12-06 13:40:38.735872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.672 [2024-12-06 13:40:38.759237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.672 [2024-12-06 13:40:38.759284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:45.672 [2024-12-06 13:40:38.759304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.304 ms 00:45:45.672 [2024-12-06 13:40:38.759316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.672 [2024-12-06 13:40:38.759512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.672 [2024-12-06 13:40:38.759528] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:45.672 [2024-12-06 13:40:38.759544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:45:45.672 [2024-12-06 13:40:38.759563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.932 [2024-12-06 13:40:38.798944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.932 [2024-12-06 13:40:38.798998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:45.932 [2024-12-06 13:40:38.799018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.346 ms 00:45:45.932 [2024-12-06 13:40:38.799030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.932 [2024-12-06 13:40:38.838804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.932 [2024-12-06 13:40:38.838896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:45.932 [2024-12-06 13:40:38.838917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.714 ms 00:45:45.932 [2024-12-06 13:40:38.838929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.932 [2024-12-06 13:40:38.877007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.932 [2024-12-06 13:40:38.877055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:45.932 [2024-12-06 13:40:38.877074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.015 ms 00:45:45.932 [2024-12-06 13:40:38.877085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.932 [2024-12-06 13:40:38.913602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.932 [2024-12-06 13:40:38.913642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:45.932 [2024-12-06 13:40:38.913660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.399 ms 00:45:45.932 [2024-12-06 13:40:38.913670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.932 [2024-12-06 13:40:38.913716] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:45.932 [2024-12-06 13:40:38.913736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913869] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.913999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 
[2024-12-06 13:40:38.914207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:45:45.932 [2024-12-06 13:40:38.914577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:45.932 [2024-12-06 13:40:38.914720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.914989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.915005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.915020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.915031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.915044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.915055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.915072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.915083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.915099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.915110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.915124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:45.933 [2024-12-06 13:40:38.915142] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:45.933 [2024-12-06 13:40:38.915157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5eb76c83-dca2-44d9-b449-b6a760e65e2b 00:45:45.933 [2024-12-06 13:40:38.915169] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:45:45.933 [2024-12-06 13:40:38.915187] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:45:45.933 [2024-12-06 13:40:38.915202] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:45.933 [2024-12-06 13:40:38.915216] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:45.933 [2024-12-06 13:40:38.915227] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:45.933 [2024-12-06 13:40:38.915241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:45.933 [2024-12-06 13:40:38.915251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:45.933 [2024-12-06 13:40:38.915264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:45.933 [2024-12-06 13:40:38.915273] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:45:45.933 [2024-12-06 13:40:38.915286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.933 [2024-12-06 13:40:38.915297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:45.933 [2024-12-06 13:40:38.915311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:45:45.933 [2024-12-06 13:40:38.915325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.933 [2024-12-06 13:40:38.937162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.933 [2024-12-06 13:40:38.937197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:45.933 [2024-12-06 13:40:38.937214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.778 ms 00:45:45.933 [2024-12-06 13:40:38.937225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.933 [2024-12-06 13:40:38.937857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:45.933 [2024-12-06 13:40:38.937873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:45.933 [2024-12-06 13:40:38.937892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:45:45.933 [2024-12-06 13:40:38.937903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.933 [2024-12-06 13:40:39.008536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:45.933 [2024-12-06 13:40:39.008597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:45.933 [2024-12-06 13:40:39.008616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:45.933 [2024-12-06 13:40:39.008628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.933 [2024-12-06 13:40:39.008718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:45.933 [2024-12-06 13:40:39.008731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:45.933 [2024-12-06 13:40:39.008750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:45.933 [2024-12-06 13:40:39.008761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.933 [2024-12-06 13:40:39.008904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:45.933 [2024-12-06 13:40:39.008920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:45.933 [2024-12-06 13:40:39.008934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:45.933 [2024-12-06 13:40:39.008946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:45.933 [2024-12-06 13:40:39.008976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:45.933 [2024-12-06 13:40:39.008987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:45.933 [2024-12-06 13:40:39.009002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:45.933 [2024-12-06 13:40:39.009016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.192 [2024-12-06 13:40:39.148470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:46.192 [2024-12-06 13:40:39.148569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:46.192 [2024-12-06 13:40:39.148592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
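The shutdown rolling back through this span was kicked off a little earlier by restore.sh (the @61-@65 markers above): the test snapshots the bdev subsystem configuration over RPC and then unloads the FTL bdev, which drives the "FTL shutdown" persist-and-rollback steps seen here. A minimal sketch of that sequence, assembled from the traced commands; where the captured JSON is written is an assumption (the trace does not show the redirect), though the same ftl.json path is handed to spdk_dd later in this log:

{
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json    # assumed destination

# Unloading persists FTL metadata and rolls the startup steps back; the RPC
# prints 'true' on success, as seen just below.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0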
00:45:46.192 [2024-12-06 13:40:39.148604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.192 [2024-12-06 13:40:39.258310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:46.192 [2024-12-06 13:40:39.258382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:46.192 [2024-12-06 13:40:39.258415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:46.192 [2024-12-06 13:40:39.258450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.192 [2024-12-06 13:40:39.258619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:46.192 [2024-12-06 13:40:39.258634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:46.192 [2024-12-06 13:40:39.258649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:46.192 [2024-12-06 13:40:39.258659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.192 [2024-12-06 13:40:39.258735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:46.192 [2024-12-06 13:40:39.258749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:46.192 [2024-12-06 13:40:39.258764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:46.192 [2024-12-06 13:40:39.258775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.192 [2024-12-06 13:40:39.258928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:46.192 [2024-12-06 13:40:39.258942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:46.192 [2024-12-06 13:40:39.258957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:46.192 [2024-12-06 13:40:39.258968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.192 [2024-12-06 13:40:39.259016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:46.192 [2024-12-06 13:40:39.259030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:46.192 [2024-12-06 13:40:39.259044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:46.192 [2024-12-06 13:40:39.259056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.192 [2024-12-06 13:40:39.259114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:46.192 [2024-12-06 13:40:39.259126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:46.192 [2024-12-06 13:40:39.259140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:46.192 [2024-12-06 13:40:39.259151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.192 [2024-12-06 13:40:39.259212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:46.192 [2024-12-06 13:40:39.259225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:46.192 [2024-12-06 13:40:39.259240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:46.192 [2024-12-06 13:40:39.259250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:46.192 [2024-12-06 13:40:39.259417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 577.413 ms, result 0 00:45:46.192 true 00:45:46.192 13:40:39 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79985 
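killprocess, whose trace follows, is the shared autotest_common.sh helper used to stop the SPDK app (pid 79985, the reactor_0 process in this run). A sketch reconstructed from the traced commands only; the real helper's error handling and the sudo branch (not taken here) are not reconstructed:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                             # @954: refuse an empty pid
    kill -0 "$pid" || return 1                            # @958: process must still exist
    if [ "$(uname)" = Linux ]; then                       # @959
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # 'reactor_0' in this run
        [ "$process_name" = sudo ] && :                   # @964: sudo path, not exercised here
    fi
    echo "killing process with pid $pid"                  # @972
    kill "$pid"                                           # @973
    wait "$pid"                                           # @978: reap it before returning
}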
00:45:46.192 13:40:39 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79985 ']' 00:45:46.192 13:40:39 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79985 00:45:46.192 13:40:39 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:45:46.451 13:40:39 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:46.451 13:40:39 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79985 00:45:46.451 13:40:39 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:46.451 13:40:39 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:46.451 killing process with pid 79985 00:45:46.451 13:40:39 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79985' 00:45:46.451 13:40:39 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79985 00:45:46.451 13:40:39 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79985 00:45:51.739 13:40:43 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:45:55.927 262144+0 records in 00:45:55.927 262144+0 records out 00:45:55.927 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.4784 s, 240 MB/s 00:45:55.927 13:40:48 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:45:57.303 13:40:50 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:57.303 [2024-12-06 13:40:50.334833] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:45:57.303 [2024-12-06 13:40:50.335057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80228 ] 00:45:57.562 [2024-12-06 13:40:50.522968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:57.822 [2024-12-06 13:40:50.669781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:58.080 [2024-12-06 13:40:51.108620] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:58.080 [2024-12-06 13:40:51.108731] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:58.340 [2024-12-06 13:40:51.277368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.340 [2024-12-06 13:40:51.277442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:58.340 [2024-12-06 13:40:51.277462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:58.340 [2024-12-06 13:40:51.277474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.340 [2024-12-06 13:40:51.277526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.340 [2024-12-06 13:40:51.277543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:58.340 [2024-12-06 13:40:51.277554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:45:58.340 [2024-12-06 13:40:51.277566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.340 [2024-12-06 13:40:51.277588] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:45:58.340 [2024-12-06 13:40:51.278568] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:58.340 [2024-12-06 13:40:51.278597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.340 [2024-12-06 13:40:51.278609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:58.340 [2024-12-06 13:40:51.278621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:45:58.340 [2024-12-06 13:40:51.278632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.340 [2024-12-06 13:40:51.281124] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:58.340 [2024-12-06 13:40:51.302439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.340 [2024-12-06 13:40:51.302477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:58.340 [2024-12-06 13:40:51.302510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.315 ms 00:45:58.340 [2024-12-06 13:40:51.302522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.340 [2024-12-06 13:40:51.302615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.340 [2024-12-06 13:40:51.302629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:58.340 [2024-12-06 13:40:51.302641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:45:58.340 [2024-12-06 13:40:51.302652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.340 [2024-12-06 13:40:51.315644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.340 [2024-12-06 13:40:51.315679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:58.340 [2024-12-06 13:40:51.315695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.909 ms 00:45:58.340 [2024-12-06 13:40:51.315716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.340 [2024-12-06 13:40:51.315840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.340 [2024-12-06 13:40:51.315855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:58.340 [2024-12-06 13:40:51.315867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:45:58.340 [2024-12-06 13:40:51.315877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.340 [2024-12-06 13:40:51.315944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.340 [2024-12-06 13:40:51.315957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:58.340 [2024-12-06 13:40:51.315968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:45:58.340 [2024-12-06 13:40:51.315978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.340 [2024-12-06 13:40:51.316016] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:58.340 [2024-12-06 13:40:51.322043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.341 [2024-12-06 13:40:51.322075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:58.341 [2024-12-06 13:40:51.322093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.036 ms 00:45:58.341 [2024-12-06 13:40:51.322119] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.341 [2024-12-06 13:40:51.322157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.341 [2024-12-06 13:40:51.322168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:58.341 [2024-12-06 13:40:51.322180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:45:58.341 [2024-12-06 13:40:51.322190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.341 [2024-12-06 13:40:51.322232] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:58.341 [2024-12-06 13:40:51.322261] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:58.341 [2024-12-06 13:40:51.322301] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:58.341 [2024-12-06 13:40:51.322325] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:58.341 [2024-12-06 13:40:51.322433] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:58.341 [2024-12-06 13:40:51.322449] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:58.341 [2024-12-06 13:40:51.322465] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:58.341 [2024-12-06 13:40:51.322479] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:58.341 [2024-12-06 13:40:51.322492] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:58.341 [2024-12-06 13:40:51.322505] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:45:58.341 [2024-12-06 13:40:51.322516] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:58.341 [2024-12-06 13:40:51.322531] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:58.341 [2024-12-06 13:40:51.322542] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:58.341 [2024-12-06 13:40:51.322552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.341 [2024-12-06 13:40:51.322563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:58.341 [2024-12-06 13:40:51.322574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:45:58.341 [2024-12-06 13:40:51.322584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.341 [2024-12-06 13:40:51.322659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.341 [2024-12-06 13:40:51.322670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:58.341 [2024-12-06 13:40:51.322681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:45:58.341 [2024-12-06 13:40:51.322691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.341 [2024-12-06 13:40:51.322790] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:58.341 [2024-12-06 13:40:51.322803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:58.341 [2024-12-06 13:40:51.322815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:45:58.341 [2024-12-06 13:40:51.322826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:58.341 [2024-12-06 13:40:51.322837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:58.341 [2024-12-06 13:40:51.322847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:58.341 [2024-12-06 13:40:51.322857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:45:58.341 [2024-12-06 13:40:51.322867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:58.341 [2024-12-06 13:40:51.322876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:45:58.341 [2024-12-06 13:40:51.322886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:58.341 [2024-12-06 13:40:51.322898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:58.341 [2024-12-06 13:40:51.322908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:45:58.341 [2024-12-06 13:40:51.322918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:58.341 [2024-12-06 13:40:51.322941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:58.341 [2024-12-06 13:40:51.322952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:45:58.341 [2024-12-06 13:40:51.322962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:58.341 [2024-12-06 13:40:51.322971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:58.341 [2024-12-06 13:40:51.322981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:45:58.341 [2024-12-06 13:40:51.322991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:58.341 [2024-12-06 13:40:51.323001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:58.341 [2024-12-06 13:40:51.323011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:45:58.341 [2024-12-06 13:40:51.323021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:58.341 [2024-12-06 13:40:51.323030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:58.341 [2024-12-06 13:40:51.323040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:45:58.341 [2024-12-06 13:40:51.323049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:58.341 [2024-12-06 13:40:51.323059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:58.341 [2024-12-06 13:40:51.323068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:45:58.341 [2024-12-06 13:40:51.323077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:58.341 [2024-12-06 13:40:51.323087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:58.341 [2024-12-06 13:40:51.323098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:45:58.341 [2024-12-06 13:40:51.323107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:58.341 [2024-12-06 13:40:51.323116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:58.341 [2024-12-06 13:40:51.323126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:45:58.341 [2024-12-06 13:40:51.323135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:58.341 [2024-12-06 13:40:51.323144] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:45:58.341 [2024-12-06 13:40:51.323153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:45:58.341 [2024-12-06 13:40:51.323162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:58.341 [2024-12-06 13:40:51.323171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:58.341 [2024-12-06 13:40:51.323181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:45:58.341 [2024-12-06 13:40:51.323190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:58.341 [2024-12-06 13:40:51.323199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:58.341 [2024-12-06 13:40:51.323208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:45:58.341 [2024-12-06 13:40:51.323218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:58.341 [2024-12-06 13:40:51.323228] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:58.341 [2024-12-06 13:40:51.323239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:58.341 [2024-12-06 13:40:51.323249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:58.341 [2024-12-06 13:40:51.323259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:58.341 [2024-12-06 13:40:51.323269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:58.341 [2024-12-06 13:40:51.323279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:58.341 [2024-12-06 13:40:51.323289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:58.341 [2024-12-06 13:40:51.323299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:58.341 [2024-12-06 13:40:51.323309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:58.341 [2024-12-06 13:40:51.323319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:58.341 [2024-12-06 13:40:51.323331] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:58.341 [2024-12-06 13:40:51.323343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:58.341 [2024-12-06 13:40:51.323359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:45:58.341 [2024-12-06 13:40:51.323371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:45:58.341 [2024-12-06 13:40:51.323381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:45:58.341 [2024-12-06 13:40:51.323392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:45:58.341 [2024-12-06 13:40:51.323413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:45:58.341 [2024-12-06 13:40:51.323425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:45:58.341 [2024-12-06 13:40:51.323436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:45:58.341 [2024-12-06 13:40:51.323447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:45:58.341 [2024-12-06 13:40:51.323458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:45:58.341 [2024-12-06 13:40:51.323469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:45:58.341 [2024-12-06 13:40:51.323480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:45:58.341 [2024-12-06 13:40:51.323491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:45:58.341 [2024-12-06 13:40:51.323502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:45:58.341 [2024-12-06 13:40:51.323513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:45:58.341 [2024-12-06 13:40:51.323524] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:58.341 [2024-12-06 13:40:51.323536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:58.342 [2024-12-06 13:40:51.323558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:45:58.342 [2024-12-06 13:40:51.323570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:58.342 [2024-12-06 13:40:51.323580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:58.342 [2024-12-06 13:40:51.323592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:58.342 [2024-12-06 13:40:51.323604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.342 [2024-12-06 13:40:51.323616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:58.342 [2024-12-06 13:40:51.323627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:45:58.342 [2024-12-06 13:40:51.323637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.342 [2024-12-06 13:40:51.374996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.342 [2024-12-06 13:40:51.375065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:58.342 [2024-12-06 13:40:51.375082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.303 ms 00:45:58.342 [2024-12-06 13:40:51.375100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.342 [2024-12-06 13:40:51.375207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.342 [2024-12-06 13:40:51.375219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:58.342 [2024-12-06 13:40:51.375230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.062 ms 00:45:58.342 [2024-12-06 13:40:51.375242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.444375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.444455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:58.602 [2024-12-06 13:40:51.444473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.006 ms 00:45:58.602 [2024-12-06 13:40:51.444485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.444550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.444571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:58.602 [2024-12-06 13:40:51.444583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:45:58.602 [2024-12-06 13:40:51.444594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.445455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.445484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:58.602 [2024-12-06 13:40:51.445495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:45:58.602 [2024-12-06 13:40:51.445506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.445653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.445668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:58.602 [2024-12-06 13:40:51.445687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:45:58.602 [2024-12-06 13:40:51.445697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.470306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.470357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:58.602 [2024-12-06 13:40:51.470375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.585 ms 00:45:58.602 [2024-12-06 13:40:51.470402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.490770] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:45:58.602 [2024-12-06 13:40:51.490808] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:58.602 [2024-12-06 13:40:51.490840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.490852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:58.602 [2024-12-06 13:40:51.490865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.276 ms 00:45:58.602 [2024-12-06 13:40:51.490874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.520849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.520905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:58.602 [2024-12-06 13:40:51.520919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.929 ms 00:45:58.602 [2024-12-06 13:40:51.520930] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.539624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.539663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:58.602 [2024-12-06 13:40:51.539677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.627 ms 00:45:58.602 [2024-12-06 13:40:51.539687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.557623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.557656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:58.602 [2024-12-06 13:40:51.557669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.892 ms 00:45:58.602 [2024-12-06 13:40:51.557679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.558504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.558534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:58.602 [2024-12-06 13:40:51.558548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:45:58.602 [2024-12-06 13:40:51.558563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.657819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.657881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:58.602 [2024-12-06 13:40:51.657901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.229 ms 00:45:58.602 [2024-12-06 13:40:51.657920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.670333] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:45:58.602 [2024-12-06 13:40:51.675424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.675459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:58.602 [2024-12-06 13:40:51.675476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.440 ms 00:45:58.602 [2024-12-06 13:40:51.675488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.675619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.675634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:58.602 [2024-12-06 13:40:51.675647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:45:58.602 [2024-12-06 13:40:51.675658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.675760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.602 [2024-12-06 13:40:51.675773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:58.602 [2024-12-06 13:40:51.675785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:45:58.602 [2024-12-06 13:40:51.675796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.602 [2024-12-06 13:40:51.675823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.603 [2024-12-06 13:40:51.675835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:45:58.603 [2024-12-06 13:40:51.675863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:58.603 [2024-12-06 13:40:51.675874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.603 [2024-12-06 13:40:51.675918] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:58.603 [2024-12-06 13:40:51.675937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.603 [2024-12-06 13:40:51.675948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:58.603 [2024-12-06 13:40:51.675959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:45:58.603 [2024-12-06 13:40:51.675970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.861 [2024-12-06 13:40:51.715198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.861 [2024-12-06 13:40:51.715242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:58.861 [2024-12-06 13:40:51.715274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.203 ms 00:45:58.861 [2024-12-06 13:40:51.715293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.861 [2024-12-06 13:40:51.715372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:58.861 [2024-12-06 13:40:51.715385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:58.861 [2024-12-06 13:40:51.715406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:45:58.861 [2024-12-06 13:40:51.715417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:58.861 [2024-12-06 13:40:51.716992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 439.042 ms, result 0 00:45:59.795  [2024-12-06T13:40:53.830Z] Copying: 29/1024 [MB] (29 MBps) [2024-12-06T13:40:54.766Z] Copying: 60/1024 [MB] (30 MBps) [2024-12-06T13:40:56.139Z] Copying: 90/1024 [MB] (30 MBps) [2024-12-06T13:40:57.074Z] Copying: 120/1024 [MB] (30 MBps) [2024-12-06T13:40:58.011Z] Copying: 151/1024 [MB] (31 MBps) [2024-12-06T13:40:58.968Z] Copying: 183/1024 [MB] (31 MBps) [2024-12-06T13:40:59.914Z] Copying: 213/1024 [MB] (30 MBps) [2024-12-06T13:41:00.849Z] Copying: 245/1024 [MB] (31 MBps) [2024-12-06T13:41:01.785Z] Copying: 276/1024 [MB] (30 MBps) [2024-12-06T13:41:03.162Z] Copying: 307/1024 [MB] (31 MBps) [2024-12-06T13:41:03.730Z] Copying: 339/1024 [MB] (31 MBps) [2024-12-06T13:41:05.108Z] Copying: 371/1024 [MB] (31 MBps) [2024-12-06T13:41:06.041Z] Copying: 402/1024 [MB] (31 MBps) [2024-12-06T13:41:06.977Z] Copying: 433/1024 [MB] (31 MBps) [2024-12-06T13:41:07.914Z] Copying: 464/1024 [MB] (30 MBps) [2024-12-06T13:41:08.851Z] Copying: 495/1024 [MB] (31 MBps) [2024-12-06T13:41:09.788Z] Copying: 526/1024 [MB] (30 MBps) [2024-12-06T13:41:11.164Z] Copying: 557/1024 [MB] (30 MBps) [2024-12-06T13:41:11.731Z] Copying: 588/1024 [MB] (30 MBps) [2024-12-06T13:41:13.107Z] Copying: 618/1024 [MB] (30 MBps) [2024-12-06T13:41:14.040Z] Copying: 647/1024 [MB] (28 MBps) [2024-12-06T13:41:14.971Z] Copying: 675/1024 [MB] (28 MBps) [2024-12-06T13:41:15.909Z] Copying: 705/1024 [MB] (29 MBps) [2024-12-06T13:41:16.844Z] Copying: 734/1024 [MB] (29 MBps) [2024-12-06T13:41:17.781Z] Copying: 763/1024 [MB] (28 MBps) [2024-12-06T13:41:19.154Z] Copying: 792/1024 [MB] (28 MBps) [2024-12-06T13:41:20.090Z] Copying: 821/1024 [MB] (29 
MBps) [2024-12-06T13:41:21.027Z] Copying: 848/1024 [MB] (26 MBps) [2024-12-06T13:41:21.965Z] Copying: 877/1024 [MB] (28 MBps) [2024-12-06T13:41:22.903Z] Copying: 906/1024 [MB] (29 MBps) [2024-12-06T13:41:23.839Z] Copying: 935/1024 [MB] (29 MBps) [2024-12-06T13:41:24.784Z] Copying: 965/1024 [MB] (29 MBps) [2024-12-06T13:41:26.161Z] Copying: 993/1024 [MB] (28 MBps) [2024-12-06T13:41:26.161Z] Copying: 1023/1024 [MB] (29 MBps) [2024-12-06T13:41:26.161Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-12-06 13:41:25.748308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.061 [2024-12-06 13:41:25.748368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:33.061 [2024-12-06 13:41:25.748391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:46:33.061 [2024-12-06 13:41:25.748419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.061 [2024-12-06 13:41:25.748448] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:33.061 [2024-12-06 13:41:25.753460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.061 [2024-12-06 13:41:25.753500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:33.061 [2024-12-06 13:41:25.753521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.989 ms 00:46:33.061 [2024-12-06 13:41:25.753532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.061 [2024-12-06 13:41:25.755517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.061 [2024-12-06 13:41:25.755565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:33.061 [2024-12-06 13:41:25.755579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.956 ms 00:46:33.061 [2024-12-06 13:41:25.755591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.061 [2024-12-06 13:41:25.770142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.061 [2024-12-06 13:41:25.770193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:33.061 [2024-12-06 13:41:25.770224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.532 ms 00:46:33.061 [2024-12-06 13:41:25.770235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.061 [2024-12-06 13:41:25.775198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.061 [2024-12-06 13:41:25.775228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:33.061 [2024-12-06 13:41:25.775240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.922 ms 00:46:33.061 [2024-12-06 13:41:25.775250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.061 [2024-12-06 13:41:25.812227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.061 [2024-12-06 13:41:25.812265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:33.061 [2024-12-06 13:41:25.812279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.900 ms 00:46:33.061 [2024-12-06 13:41:25.812289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.061 [2024-12-06 13:41:25.832614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.062 [2024-12-06 13:41:25.832652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map 
metadata 00:46:33.062 [2024-12-06 13:41:25.832667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.288 ms 00:46:33.062 [2024-12-06 13:41:25.832678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.062 [2024-12-06 13:41:25.832813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.062 [2024-12-06 13:41:25.832831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:33.062 [2024-12-06 13:41:25.832843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:46:33.062 [2024-12-06 13:41:25.832852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.062 [2024-12-06 13:41:25.869228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.062 [2024-12-06 13:41:25.869276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:33.062 [2024-12-06 13:41:25.869305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.358 ms 00:46:33.062 [2024-12-06 13:41:25.869316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.062 [2024-12-06 13:41:25.904861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.062 [2024-12-06 13:41:25.904898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:46:33.062 [2024-12-06 13:41:25.904912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.508 ms 00:46:33.062 [2024-12-06 13:41:25.904922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.062 [2024-12-06 13:41:25.939885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.062 [2024-12-06 13:41:25.939923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:33.062 [2024-12-06 13:41:25.939936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.926 ms 00:46:33.062 [2024-12-06 13:41:25.939946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.062 [2024-12-06 13:41:25.975306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.062 [2024-12-06 13:41:25.975350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:33.062 [2024-12-06 13:41:25.975363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.282 ms 00:46:33.062 [2024-12-06 13:41:25.975373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.062 [2024-12-06 13:41:25.975433] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:33.062 [2024-12-06 13:41:25.975452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975827] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.975999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 
13:41:25.976093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:46:33.062 [2024-12-06 13:41:25.976368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:46:33.062 [2024-12-06 13:41:25.976610] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:46:33.062 [2024-12-06 13:41:25.976624] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5eb76c83-dca2-44d9-b449-b6a760e65e2b 00:46:33.062 [2024-12-06 13:41:25.976636] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:46:33.062 [2024-12-06 13:41:25.976647] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:46:33.062 [2024-12-06 13:41:25.976657] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:46:33.062 [2024-12-06 13:41:25.976667] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:46:33.062 [2024-12-06 13:41:25.976677] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:46:33.062 [2024-12-06 13:41:25.976710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:46:33.062 [2024-12-06 13:41:25.976721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:33.062 [2024-12-06 13:41:25.976730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:33.062 [2024-12-06 13:41:25.976740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:46:33.062 [2024-12-06 13:41:25.976750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.062 [2024-12-06 13:41:25.976760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:33.062 [2024-12-06 13:41:25.976771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.319 ms 00:46:33.062 [2024-12-06 13:41:25.976782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.062 [2024-12-06 13:41:25.998136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.062 [2024-12-06 13:41:25.998167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:33.062 [2024-12-06 13:41:25.998197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.315 ms 00:46:33.062 [2024-12-06 13:41:25.998208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.062 [2024-12-06 13:41:25.998782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:33.062 [2024-12-06 13:41:25.998806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:33.062 [2024-12-06 13:41:25.998818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:46:33.062 [2024-12-06 13:41:25.998835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.062 [2024-12-06 13:41:26.053066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:33.062 [2024-12-06 13:41:26.053103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:33.062 [2024-12-06 13:41:26.053117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:33.062 [2024-12-06 13:41:26.053145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.062 [2024-12-06 13:41:26.053209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:33.062 [2024-12-06 13:41:26.053222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:33.062 [2024-12-06 13:41:26.053232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:33.062 [2024-12-06 13:41:26.053249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.062 [2024-12-06 13:41:26.053337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:33.062 [2024-12-06 13:41:26.053352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:33.062 [2024-12-06 13:41:26.053363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:33.062 [2024-12-06 13:41:26.053373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.062 [2024-12-06 13:41:26.053391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:33.062 [2024-12-06 13:41:26.053403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:33.062 [2024-12-06 13:41:26.053424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:33.062 [2024-12-06 13:41:26.053435] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.321 [2024-12-06 13:41:26.191760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:33.321 [2024-12-06 13:41:26.191848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:33.321 [2024-12-06 13:41:26.191869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:33.321 [2024-12-06 13:41:26.191882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.321 [2024-12-06 13:41:26.300179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:33.321 [2024-12-06 13:41:26.300261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:33.321 [2024-12-06 13:41:26.300295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:33.321 [2024-12-06 13:41:26.300314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.321 [2024-12-06 13:41:26.300450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:33.321 [2024-12-06 13:41:26.300465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:33.321 [2024-12-06 13:41:26.300477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:33.321 [2024-12-06 13:41:26.300488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.321 [2024-12-06 13:41:26.300538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:33.321 [2024-12-06 13:41:26.300551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:33.321 [2024-12-06 13:41:26.300563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:33.321 [2024-12-06 13:41:26.300574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.321 [2024-12-06 13:41:26.300717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:33.321 [2024-12-06 13:41:26.300733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:33.321 [2024-12-06 13:41:26.300744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:33.321 [2024-12-06 13:41:26.300755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.321 [2024-12-06 13:41:26.300797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:33.321 [2024-12-06 13:41:26.300810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:33.321 [2024-12-06 13:41:26.300821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:33.321 [2024-12-06 13:41:26.300832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.321 [2024-12-06 13:41:26.300881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:33.321 [2024-12-06 13:41:26.300898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:33.321 [2024-12-06 13:41:26.300909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:33.321 [2024-12-06 13:41:26.300920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.321 [2024-12-06 13:41:26.300972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:33.321 [2024-12-06 13:41:26.300986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:33.321 [2024-12-06 13:41:26.300996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:46:33.321 [2024-12-06 13:41:26.301007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:33.321 [2024-12-06 13:41:26.301163] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 552.806 ms, result 0 00:46:34.701 00:46:34.701 00:46:34.701 13:41:27 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:46:34.701 [2024-12-06 13:41:27.648923] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:46:34.701 [2024-12-06 13:41:27.649122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80592 ] 00:46:34.960 [2024-12-06 13:41:27.831180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:34.960 [2024-12-06 13:41:27.981881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:35.529 [2024-12-06 13:41:28.427181] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:35.529 [2024-12-06 13:41:28.427294] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:35.529 [2024-12-06 13:41:28.593739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.529 [2024-12-06 13:41:28.593804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:35.529 [2024-12-06 13:41:28.593837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:46:35.529 [2024-12-06 13:41:28.593848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.529 [2024-12-06 13:41:28.593899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.529 [2024-12-06 13:41:28.593916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:35.529 [2024-12-06 13:41:28.593927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:46:35.529 [2024-12-06 13:41:28.593938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.529 [2024-12-06 13:41:28.593961] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:35.529 [2024-12-06 13:41:28.594940] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:35.529 [2024-12-06 13:41:28.594970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.529 [2024-12-06 13:41:28.594982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:35.529 [2024-12-06 13:41:28.594994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:46:35.529 [2024-12-06 13:41:28.595004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.529 [2024-12-06 13:41:28.597498] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:46:35.529 [2024-12-06 13:41:28.617572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.529 [2024-12-06 13:41:28.617610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:46:35.529 [2024-12-06 13:41:28.617625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 20.075 ms 00:46:35.529 [2024-12-06 13:41:28.617636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.529 [2024-12-06 13:41:28.617726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.529 [2024-12-06 13:41:28.617740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:46:35.529 [2024-12-06 13:41:28.617752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:46:35.529 [2024-12-06 13:41:28.617762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.790 [2024-12-06 13:41:28.630548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.790 [2024-12-06 13:41:28.630579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:35.790 [2024-12-06 13:41:28.630594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.713 ms 00:46:35.790 [2024-12-06 13:41:28.630625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.790 [2024-12-06 13:41:28.630720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.790 [2024-12-06 13:41:28.630735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:35.790 [2024-12-06 13:41:28.630747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:46:35.790 [2024-12-06 13:41:28.630758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.790 [2024-12-06 13:41:28.630817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.790 [2024-12-06 13:41:28.630830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:35.790 [2024-12-06 13:41:28.630841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:46:35.790 [2024-12-06 13:41:28.630852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.790 [2024-12-06 13:41:28.630886] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:35.790 [2024-12-06 13:41:28.636731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.790 [2024-12-06 13:41:28.636762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:35.790 [2024-12-06 13:41:28.636795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.853 ms 00:46:35.790 [2024-12-06 13:41:28.636807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.790 [2024-12-06 13:41:28.636842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.790 [2024-12-06 13:41:28.636855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:35.790 [2024-12-06 13:41:28.636866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:46:35.790 [2024-12-06 13:41:28.636877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.790 [2024-12-06 13:41:28.636916] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:46:35.790 [2024-12-06 13:41:28.636943] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:46:35.790 [2024-12-06 13:41:28.636981] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:46:35.790 [2024-12-06 13:41:28.637005] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 
0x190 bytes 00:46:35.790 [2024-12-06 13:41:28.637099] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:46:35.790 [2024-12-06 13:41:28.637113] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:35.790 [2024-12-06 13:41:28.637143] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:46:35.790 [2024-12-06 13:41:28.637157] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:35.790 [2024-12-06 13:41:28.637170] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:35.790 [2024-12-06 13:41:28.637182] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:46:35.790 [2024-12-06 13:41:28.637194] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:35.791 [2024-12-06 13:41:28.637208] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:46:35.791 [2024-12-06 13:41:28.637219] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:46:35.791 [2024-12-06 13:41:28.637230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.791 [2024-12-06 13:41:28.637241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:35.791 [2024-12-06 13:41:28.637252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:46:35.791 [2024-12-06 13:41:28.637262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.791 [2024-12-06 13:41:28.637336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.791 [2024-12-06 13:41:28.637347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:35.791 [2024-12-06 13:41:28.637358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:46:35.791 [2024-12-06 13:41:28.637368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.791 [2024-12-06 13:41:28.637475] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:35.791 [2024-12-06 13:41:28.637495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:46:35.791 [2024-12-06 13:41:28.637506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:35.791 [2024-12-06 13:41:28.637517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:35.791 [2024-12-06 13:41:28.637537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:46:35.791 [2024-12-06 13:41:28.637556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:35.791 [2024-12-06 13:41:28.637567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:35.791 [2024-12-06 13:41:28.637588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:35.791 [2024-12-06 13:41:28.637597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:46:35.791 [2024-12-06 13:41:28.637607] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:35.791 [2024-12-06 13:41:28.637630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:35.791 [2024-12-06 13:41:28.637640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:46:35.791 [2024-12-06 13:41:28.637650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:46:35.791 [2024-12-06 13:41:28.637669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:46:35.791 [2024-12-06 13:41:28.637679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:35.791 [2024-12-06 13:41:28.637698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:35.791 [2024-12-06 13:41:28.637717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:35.791 [2024-12-06 13:41:28.637727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:35.791 [2024-12-06 13:41:28.637746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:35.791 [2024-12-06 13:41:28.637756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:35.791 [2024-12-06 13:41:28.637774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:35.791 [2024-12-06 13:41:28.637783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:35.791 [2024-12-06 13:41:28.637802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:35.791 [2024-12-06 13:41:28.637811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:35.791 [2024-12-06 13:41:28.637830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:46:35.791 [2024-12-06 13:41:28.637839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:46:35.791 [2024-12-06 13:41:28.637848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:35.791 [2024-12-06 13:41:28.637857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:46:35.791 [2024-12-06 13:41:28.637867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:46:35.791 [2024-12-06 13:41:28.637876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:46:35.791 [2024-12-06 13:41:28.637906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:46:35.791 [2024-12-06 13:41:28.637918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637928] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:35.791 [2024-12-06 
13:41:28.637939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:35.791 [2024-12-06 13:41:28.637949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:35.791 [2024-12-06 13:41:28.637959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:35.791 [2024-12-06 13:41:28.637970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:46:35.791 [2024-12-06 13:41:28.637980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:35.791 [2024-12-06 13:41:28.637989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:35.791 [2024-12-06 13:41:28.637999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:35.791 [2024-12-06 13:41:28.638008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:35.791 [2024-12-06 13:41:28.638018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:35.791 [2024-12-06 13:41:28.638029] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:35.791 [2024-12-06 13:41:28.638042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:35.791 [2024-12-06 13:41:28.638060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:46:35.791 [2024-12-06 13:41:28.638071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:46:35.791 [2024-12-06 13:41:28.638082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:46:35.791 [2024-12-06 13:41:28.638093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:46:35.791 [2024-12-06 13:41:28.638103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:46:35.791 [2024-12-06 13:41:28.638114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:46:35.791 [2024-12-06 13:41:28.638125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:46:35.791 [2024-12-06 13:41:28.638135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:46:35.791 [2024-12-06 13:41:28.638146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:46:35.791 [2024-12-06 13:41:28.638156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:46:35.791 [2024-12-06 13:41:28.638166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:46:35.791 [2024-12-06 13:41:28.638177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:46:35.791 [2024-12-06 13:41:28.638188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:46:35.791 [2024-12-06 13:41:28.638198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:46:35.791 [2024-12-06 13:41:28.638208] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:46:35.791 [2024-12-06 13:41:28.638225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:35.791 [2024-12-06 13:41:28.638236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:35.791 [2024-12-06 13:41:28.638248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:35.791 [2024-12-06 13:41:28.638258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:35.791 [2024-12-06 13:41:28.638270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:46:35.791 [2024-12-06 13:41:28.638281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.791 [2024-12-06 13:41:28.638293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:35.791 [2024-12-06 13:41:28.638304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:46:35.791 [2024-12-06 13:41:28.638314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.791 [2024-12-06 13:41:28.688768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.791 [2024-12-06 13:41:28.688817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:35.791 [2024-12-06 13:41:28.688833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.398 ms 00:46:35.791 [2024-12-06 13:41:28.688866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.791 [2024-12-06 13:41:28.688958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.791 [2024-12-06 13:41:28.688970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:35.791 [2024-12-06 13:41:28.688983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:46:35.791 [2024-12-06 13:41:28.688994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.791 [2024-12-06 13:41:28.752750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.791 [2024-12-06 13:41:28.752793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:35.792 [2024-12-06 13:41:28.752808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.656 ms 00:46:35.792 [2024-12-06 13:41:28.752820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.792 [2024-12-06 13:41:28.752865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.792 [2024-12-06 13:41:28.752886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:35.792 [2024-12-06 13:41:28.752898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:46:35.792 [2024-12-06 13:41:28.752908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.792 [2024-12-06 13:41:28.753749] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.792 [2024-12-06 13:41:28.753776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:35.792 [2024-12-06 13:41:28.753789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:46:35.792 [2024-12-06 13:41:28.753800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.792 [2024-12-06 13:41:28.753943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.792 [2024-12-06 13:41:28.753958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:35.792 [2024-12-06 13:41:28.753977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:46:35.792 [2024-12-06 13:41:28.753988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.792 [2024-12-06 13:41:28.778473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.792 [2024-12-06 13:41:28.778518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:35.792 [2024-12-06 13:41:28.778533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.462 ms 00:46:35.792 [2024-12-06 13:41:28.778545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.792 [2024-12-06 13:41:28.798780] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:46:35.792 [2024-12-06 13:41:28.798816] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:46:35.792 [2024-12-06 13:41:28.798832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.792 [2024-12-06 13:41:28.798860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:46:35.792 [2024-12-06 13:41:28.798872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.159 ms 00:46:35.792 [2024-12-06 13:41:28.798882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.792 [2024-12-06 13:41:28.827914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.792 [2024-12-06 13:41:28.827952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:46:35.792 [2024-12-06 13:41:28.827966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.989 ms 00:46:35.792 [2024-12-06 13:41:28.827994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.792 [2024-12-06 13:41:28.846044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.792 [2024-12-06 13:41:28.846082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:46:35.792 [2024-12-06 13:41:28.846096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.986 ms 00:46:35.792 [2024-12-06 13:41:28.846106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.792 [2024-12-06 13:41:28.864522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.792 [2024-12-06 13:41:28.864559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:46:35.792 [2024-12-06 13:41:28.864573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.361 ms 00:46:35.792 [2024-12-06 13:41:28.864583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.792 [2024-12-06 13:41:28.865420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.792 
[2024-12-06 13:41:28.865469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:35.792 [2024-12-06 13:41:28.865487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:46:35.792 [2024-12-06 13:41:28.865497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.052 [2024-12-06 13:41:28.965711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.052 [2024-12-06 13:41:28.965784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:46:36.052 [2024-12-06 13:41:28.965812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.188 ms 00:46:36.052 [2024-12-06 13:41:28.965825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.052 [2024-12-06 13:41:28.977918] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:46:36.052 [2024-12-06 13:41:28.983010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.052 [2024-12-06 13:41:28.983043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:36.052 [2024-12-06 13:41:28.983076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.117 ms 00:46:36.052 [2024-12-06 13:41:28.983088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.052 [2024-12-06 13:41:28.983247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.052 [2024-12-06 13:41:28.983263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:46:36.052 [2024-12-06 13:41:28.983280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:46:36.052 [2024-12-06 13:41:28.983290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.052 [2024-12-06 13:41:28.983384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.052 [2024-12-06 13:41:28.983398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:36.052 [2024-12-06 13:41:28.983409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:46:36.052 [2024-12-06 13:41:28.983433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.052 [2024-12-06 13:41:28.983460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.052 [2024-12-06 13:41:28.983472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:36.052 [2024-12-06 13:41:28.983483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:46:36.052 [2024-12-06 13:41:28.983494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.052 [2024-12-06 13:41:28.983539] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:46:36.052 [2024-12-06 13:41:28.983560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.052 [2024-12-06 13:41:28.983572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:46:36.052 [2024-12-06 13:41:28.983583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:46:36.052 [2024-12-06 13:41:28.983594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.052 [2024-12-06 13:41:29.021494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.052 [2024-12-06 13:41:29.021532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:36.052 
[2024-12-06 13:41:29.021554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.877 ms 00:46:36.052 [2024-12-06 13:41:29.021566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.052 [2024-12-06 13:41:29.021659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:36.052 [2024-12-06 13:41:29.021675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:46:36.052 [2024-12-06 13:41:29.021687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:46:36.052 [2024-12-06 13:41:29.021697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:36.052 [2024-12-06 13:41:29.023329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 429.025 ms, result 0 00:46:37.427  [2024-12-06T13:41:31.463Z] Copying: 31/1024 [MB] (31 MBps) [2024-12-06T13:41:32.400Z] Copying: 63/1024 [MB] (32 MBps) [2024-12-06T13:41:33.338Z] Copying: 94/1024 [MB] (31 MBps) [2024-12-06T13:41:34.276Z] Copying: 126/1024 [MB] (32 MBps) [2024-12-06T13:41:35.652Z] Copying: 157/1024 [MB] (30 MBps) [2024-12-06T13:41:36.584Z] Copying: 189/1024 [MB] (31 MBps) [2024-12-06T13:41:37.586Z] Copying: 221/1024 [MB] (31 MBps) [2024-12-06T13:41:38.521Z] Copying: 253/1024 [MB] (31 MBps) [2024-12-06T13:41:39.457Z] Copying: 284/1024 [MB] (31 MBps) [2024-12-06T13:41:40.395Z] Copying: 315/1024 [MB] (30 MBps) [2024-12-06T13:41:41.332Z] Copying: 346/1024 [MB] (31 MBps) [2024-12-06T13:41:42.268Z] Copying: 379/1024 [MB] (32 MBps) [2024-12-06T13:41:43.647Z] Copying: 411/1024 [MB] (31 MBps) [2024-12-06T13:41:44.583Z] Copying: 443/1024 [MB] (31 MBps) [2024-12-06T13:41:45.520Z] Copying: 474/1024 [MB] (31 MBps) [2024-12-06T13:41:46.458Z] Copying: 507/1024 [MB] (32 MBps) [2024-12-06T13:41:47.408Z] Copying: 537/1024 [MB] (30 MBps) [2024-12-06T13:41:48.343Z] Copying: 569/1024 [MB] (31 MBps) [2024-12-06T13:41:49.278Z] Copying: 600/1024 [MB] (30 MBps) [2024-12-06T13:41:50.656Z] Copying: 629/1024 [MB] (28 MBps) [2024-12-06T13:41:51.594Z] Copying: 659/1024 [MB] (30 MBps) [2024-12-06T13:41:52.529Z] Copying: 689/1024 [MB] (29 MBps) [2024-12-06T13:41:53.465Z] Copying: 719/1024 [MB] (30 MBps) [2024-12-06T13:41:54.400Z] Copying: 747/1024 [MB] (28 MBps) [2024-12-06T13:41:55.343Z] Copying: 776/1024 [MB] (29 MBps) [2024-12-06T13:41:56.282Z] Copying: 806/1024 [MB] (29 MBps) [2024-12-06T13:41:57.659Z] Copying: 835/1024 [MB] (29 MBps) [2024-12-06T13:41:58.594Z] Copying: 864/1024 [MB] (28 MBps) [2024-12-06T13:41:59.564Z] Copying: 894/1024 [MB] (29 MBps) [2024-12-06T13:42:00.501Z] Copying: 925/1024 [MB] (30 MBps) [2024-12-06T13:42:01.437Z] Copying: 957/1024 [MB] (31 MBps) [2024-12-06T13:42:02.372Z] Copying: 988/1024 [MB] (31 MBps) [2024-12-06T13:42:02.629Z] Copying: 1020/1024 [MB] (31 MBps) [2024-12-06T13:42:02.887Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-12-06 13:42:02.739231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:09.787 [2024-12-06 13:42:02.739336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:47:09.787 [2024-12-06 13:42:02.739358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:47:09.787 [2024-12-06 13:42:02.739372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.787 [2024-12-06 13:42:02.739415] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:47:09.787 [2024-12-06 13:42:02.745659] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:09.787 [2024-12-06 13:42:02.745733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:47:09.787 [2024-12-06 13:42:02.745751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.218 ms 00:47:09.787 [2024-12-06 13:42:02.745763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.787 [2024-12-06 13:42:02.746054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:09.787 [2024-12-06 13:42:02.746091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:47:09.787 [2024-12-06 13:42:02.746115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:47:09.787 [2024-12-06 13:42:02.746796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.787 [2024-12-06 13:42:02.749789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:09.787 [2024-12-06 13:42:02.749825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:47:09.787 [2024-12-06 13:42:02.749855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.935 ms 00:47:09.787 [2024-12-06 13:42:02.749877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.787 [2024-12-06 13:42:02.755967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:09.787 [2024-12-06 13:42:02.756018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:47:09.787 [2024-12-06 13:42:02.756032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.056 ms 00:47:09.787 [2024-12-06 13:42:02.756044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.787 [2024-12-06 13:42:02.799437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:09.787 [2024-12-06 13:42:02.799511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:47:09.787 [2024-12-06 13:42:02.799530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.274 ms 00:47:09.787 [2024-12-06 13:42:02.799547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.787 [2024-12-06 13:42:02.819648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:09.787 [2024-12-06 13:42:02.819719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:47:09.787 [2024-12-06 13:42:02.819736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.032 ms 00:47:09.787 [2024-12-06 13:42:02.819748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.787 [2024-12-06 13:42:02.819916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:09.787 [2024-12-06 13:42:02.819944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:47:09.787 [2024-12-06 13:42:02.819963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:47:09.787 [2024-12-06 13:42:02.819979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:09.787 [2024-12-06 13:42:02.857041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:09.787 [2024-12-06 13:42:02.857102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:47:09.787 [2024-12-06 13:42:02.857119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.028 ms 00:47:09.787 [2024-12-06 13:42:02.857147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
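Each management step in this log is emitted as a small trace_step group (Action or Rollback, then name, duration, status), so per-step timings can be pulled straight out of the console output. A minimal sketch, assuming the raw log keeps one entry per line as the console does (the console.log filename is a placeholder):

  # pair every 'name:' entry with the 'duration:' entry that follows it,
  # then list the slowest FTL management steps first
  awk '/428:trace_step:/ && /name: /     { sub(/.*name: /, "");     step = $0 }
       /430:trace_step:/ && /duration: / { sub(/.*duration: /, ""); print $1 "\t" step }' console.log \
    | sort -rn | head

Against the shutdown above, this would surface steps such as 'Persist NV cache metadata' (43.274 ms) and 'Persist trim metadata' (37.940 ms) at the top.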
00:47:10.048 [2024-12-06 13:42:02.895137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:10.048 [2024-12-06 13:42:02.895241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:47:10.048 [2024-12-06 13:42:02.895259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.940 ms 00:47:10.048 [2024-12-06 13:42:02.895271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.048 [2024-12-06 13:42:02.931092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:10.048 [2024-12-06 13:42:02.931162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:47:10.048 [2024-12-06 13:42:02.931179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.768 ms 00:47:10.048 [2024-12-06 13:42:02.931191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.048 [2024-12-06 13:42:02.966938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:10.048 [2024-12-06 13:42:02.966983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:47:10.048 [2024-12-06 13:42:02.967014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.656 ms 00:47:10.048 [2024-12-06 13:42:02.967025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.048 [2024-12-06 13:42:02.967066] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:47:10.048 [2024-12-06 13:42:02.967092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967790] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.967992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.968014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.968034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.968053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:47:10.048 [2024-12-06 13:42:02.968073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 
13:42:02.968278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 
00:47:10.049 [2024-12-06 13:42:02.968803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.968997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:10.049 [2024-12-06 13:42:02.969027] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:10.049 [2024-12-06 13:42:02.969047] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5eb76c83-dca2-44d9-b449-b6a760e65e2b 00:47:10.049 [2024-12-06 13:42:02.969068] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:47:10.049 [2024-12-06 13:42:02.969084] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:47:10.049 [2024-12-06 13:42:02.969097] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:47:10.049 [2024-12-06 13:42:02.969115] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:47:10.049 [2024-12-06 13:42:02.969156] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:10.049 [2024-12-06 13:42:02.969178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:10.049 [2024-12-06 13:42:02.969198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:10.049 [2024-12-06 13:42:02.969216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:10.049 [2024-12-06 13:42:02.969234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:10.049 [2024-12-06 13:42:02.969248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:10.049 [2024-12-06 13:42:02.969263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:10.049 [2024-12-06 13:42:02.969284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms 00:47:10.049 [2024-12-06 13:42:02.969310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.049 [2024-12-06 13:42:02.990976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:10.049 [2024-12-06 13:42:02.991017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:10.049 [2024-12-06 13:42:02.991048] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.603 ms 00:47:10.049 [2024-12-06 13:42:02.991059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.049 [2024-12-06 13:42:02.991724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:10.049 [2024-12-06 13:42:02.991753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:10.049 [2024-12-06 13:42:02.991773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:47:10.049 [2024-12-06 13:42:02.991784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.049 [2024-12-06 13:42:03.048756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:10.049 [2024-12-06 13:42:03.048814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:10.049 [2024-12-06 13:42:03.048831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:10.049 [2024-12-06 13:42:03.048843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.049 [2024-12-06 13:42:03.048941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:10.049 [2024-12-06 13:42:03.048958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:10.049 [2024-12-06 13:42:03.048977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:10.049 [2024-12-06 13:42:03.048988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.049 [2024-12-06 13:42:03.049076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:10.049 [2024-12-06 13:42:03.049090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:10.049 [2024-12-06 13:42:03.049102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:10.049 [2024-12-06 13:42:03.049113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.049 [2024-12-06 13:42:03.049132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:10.049 [2024-12-06 13:42:03.049143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:10.049 [2024-12-06 13:42:03.049157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:10.049 [2024-12-06 13:42:03.049183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.309 [2024-12-06 13:42:03.189691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:10.309 [2024-12-06 13:42:03.189763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:10.309 [2024-12-06 13:42:03.189798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:10.309 [2024-12-06 13:42:03.189810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.309 [2024-12-06 13:42:03.301134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:10.309 [2024-12-06 13:42:03.301210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:10.309 [2024-12-06 13:42:03.301237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:10.309 [2024-12-06 13:42:03.301249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.309 [2024-12-06 13:42:03.301374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:10.309 [2024-12-06 13:42:03.301388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:47:10.309 [2024-12-06 13:42:03.301427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:10.309 [2024-12-06 13:42:03.301440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.309 [2024-12-06 13:42:03.301499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:10.309 [2024-12-06 13:42:03.301520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:10.309 [2024-12-06 13:42:03.301540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:10.309 [2024-12-06 13:42:03.301556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.309 [2024-12-06 13:42:03.301742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:10.309 [2024-12-06 13:42:03.301769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:10.309 [2024-12-06 13:42:03.301792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:10.309 [2024-12-06 13:42:03.301812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.309 [2024-12-06 13:42:03.301882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:10.309 [2024-12-06 13:42:03.301903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:10.309 [2024-12-06 13:42:03.301924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:10.309 [2024-12-06 13:42:03.301941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.309 [2024-12-06 13:42:03.302024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:10.309 [2024-12-06 13:42:03.302056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:10.309 [2024-12-06 13:42:03.302078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:10.309 [2024-12-06 13:42:03.302097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.309 [2024-12-06 13:42:03.302180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:10.309 [2024-12-06 13:42:03.302200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:10.309 [2024-12-06 13:42:03.302216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:10.309 [2024-12-06 13:42:03.302234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:10.309 [2024-12-06 13:42:03.302481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 563.160 ms, result 0 00:47:11.688 00:47:11.688 00:47:11.688 13:42:04 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:47:13.593 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:47:13.593 13:42:06 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:47:13.593 [2024-12-06 13:42:06.459682] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
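Taken together, the three ftl.ftl_restore steps above form a read-verify-writeback round trip: restore.sh@74 dumps the FTL device into a regular file, restore.sh@76 checks that dump against a pre-recorded md5, and restore.sh@79 streams the same file back into ftl0 at an offset. Since --count=262144 blocks copied 1024 MiB, the block size works out to 4 KiB, which puts the --seek=131072 writeback offset at 512 MiB. A condensed restatement of those commands (paths, flags, and values are copied from the log; the SPDK variable is just shorthand):

  SPDK=/home/vagrant/spdk_repo/spdk
  CFG=$SPDK/test/ftl/config/ftl.json
  FILE=$SPDK/test/ftl/testfile

  # dump 262144 x 4 KiB blocks (1 GiB) from the FTL bdev into a flat file
  $SPDK/build/bin/spdk_dd --ib=ftl0 --of=$FILE --json=$CFG --count=262144
  # verify the dump against the checksum recorded earlier in the test
  md5sum -c $FILE.md5
  # write the same data back into ftl0 starting 131072 blocks (512 MiB) in
  $SPDK/build/bin/spdk_dd --if=$FILE --ob=ftl0 --json=$CFG --seek=131072

Each spdk_dd invocation brings the FTL device up and tears it down again, which is why the full 'FTL startup' / 'FTL shutdown' trace sequences repeat below.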
00:47:13.593 [2024-12-06 13:42:06.459878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80980 ] 00:47:13.593 [2024-12-06 13:42:06.657817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:13.852 [2024-12-06 13:42:06.840291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:14.422 [2024-12-06 13:42:07.274485] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:14.422 [2024-12-06 13:42:07.274855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:14.422 [2024-12-06 13:42:07.440800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.422 [2024-12-06 13:42:07.440864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:47:14.422 [2024-12-06 13:42:07.440881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:47:14.422 [2024-12-06 13:42:07.440892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.422 [2024-12-06 13:42:07.440943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.422 [2024-12-06 13:42:07.440960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:14.422 [2024-12-06 13:42:07.440970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:47:14.422 [2024-12-06 13:42:07.440980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.422 [2024-12-06 13:42:07.441001] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:47:14.422 [2024-12-06 13:42:07.442045] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:47:14.422 [2024-12-06 13:42:07.442089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.422 [2024-12-06 13:42:07.442111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:14.422 [2024-12-06 13:42:07.442133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:47:14.422 [2024-12-06 13:42:07.442153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.422 [2024-12-06 13:42:07.444876] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:47:14.422 [2024-12-06 13:42:07.465476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.422 [2024-12-06 13:42:07.465528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:47:14.422 [2024-12-06 13:42:07.465543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.601 ms 00:47:14.422 [2024-12-06 13:42:07.465571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.422 [2024-12-06 13:42:07.465654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.422 [2024-12-06 13:42:07.465667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:47:14.422 [2024-12-06 13:42:07.465678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:47:14.422 [2024-12-06 13:42:07.465689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.422 [2024-12-06 13:42:07.478702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:47:14.422 [2024-12-06 13:42:07.478736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:14.422 [2024-12-06 13:42:07.478750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.940 ms 00:47:14.422 [2024-12-06 13:42:07.478767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.422 [2024-12-06 13:42:07.478860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.423 [2024-12-06 13:42:07.478873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:14.423 [2024-12-06 13:42:07.478884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:47:14.423 [2024-12-06 13:42:07.478894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.423 [2024-12-06 13:42:07.478955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.423 [2024-12-06 13:42:07.478967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:47:14.423 [2024-12-06 13:42:07.478978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:47:14.423 [2024-12-06 13:42:07.478988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.423 [2024-12-06 13:42:07.479020] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:47:14.423 [2024-12-06 13:42:07.485107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.423 [2024-12-06 13:42:07.485275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:14.423 [2024-12-06 13:42:07.485307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.095 ms 00:47:14.423 [2024-12-06 13:42:07.485319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.423 [2024-12-06 13:42:07.485370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.423 [2024-12-06 13:42:07.485391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:47:14.423 [2024-12-06 13:42:07.485449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:47:14.423 [2024-12-06 13:42:07.485470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.423 [2024-12-06 13:42:07.485526] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:47:14.423 [2024-12-06 13:42:07.485559] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:47:14.423 [2024-12-06 13:42:07.485596] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:47:14.423 [2024-12-06 13:42:07.485621] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:47:14.423 [2024-12-06 13:42:07.485718] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:47:14.423 [2024-12-06 13:42:07.485733] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:47:14.423 [2024-12-06 13:42:07.485748] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:47:14.423 [2024-12-06 13:42:07.485762] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:47:14.423 [2024-12-06 13:42:07.485775] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:47:14.423 [2024-12-06 13:42:07.485788] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:47:14.423 [2024-12-06 13:42:07.485800] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:47:14.423 [2024-12-06 13:42:07.485814] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:47:14.423 [2024-12-06 13:42:07.485825] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:47:14.423 [2024-12-06 13:42:07.485837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.423 [2024-12-06 13:42:07.485847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:47:14.423 [2024-12-06 13:42:07.485859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:47:14.423 [2024-12-06 13:42:07.485869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.423 [2024-12-06 13:42:07.485950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.423 [2024-12-06 13:42:07.485963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:47:14.423 [2024-12-06 13:42:07.485974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:47:14.423 [2024-12-06 13:42:07.485984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.423 [2024-12-06 13:42:07.486080] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:47:14.423 [2024-12-06 13:42:07.486094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:47:14.423 [2024-12-06 13:42:07.486105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:14.423 [2024-12-06 13:42:07.486116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:47:14.423 [2024-12-06 13:42:07.486137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:47:14.423 [2024-12-06 13:42:07.486157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:47:14.423 [2024-12-06 13:42:07.486168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:14.423 [2024-12-06 13:42:07.486187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:47:14.423 [2024-12-06 13:42:07.486199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:47:14.423 [2024-12-06 13:42:07.486210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:14.423 [2024-12-06 13:42:07.486232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:47:14.423 [2024-12-06 13:42:07.486242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:47:14.423 [2024-12-06 13:42:07.486252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:47:14.423 [2024-12-06 13:42:07.486272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:47:14.423 [2024-12-06 13:42:07.486282] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:47:14.423 [2024-12-06 13:42:07.486302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:14.423 [2024-12-06 13:42:07.486321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:47:14.423 [2024-12-06 13:42:07.486331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:14.423 [2024-12-06 13:42:07.486350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:47:14.423 [2024-12-06 13:42:07.486359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:14.423 [2024-12-06 13:42:07.486388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:47:14.423 [2024-12-06 13:42:07.486398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:14.423 [2024-12-06 13:42:07.486416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:47:14.423 [2024-12-06 13:42:07.486445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:14.423 [2024-12-06 13:42:07.486464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:47:14.423 [2024-12-06 13:42:07.486474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:47:14.423 [2024-12-06 13:42:07.486483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:14.423 [2024-12-06 13:42:07.486493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:47:14.423 [2024-12-06 13:42:07.486511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:47:14.423 [2024-12-06 13:42:07.486528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:47:14.423 [2024-12-06 13:42:07.486558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:47:14.423 [2024-12-06 13:42:07.486573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486591] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:47:14.423 [2024-12-06 13:42:07.486610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:47:14.423 [2024-12-06 13:42:07.486628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:14.423 [2024-12-06 13:42:07.486646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:14.423 [2024-12-06 13:42:07.486663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:47:14.423 [2024-12-06 13:42:07.486681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:47:14.423 [2024-12-06 13:42:07.486714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:47:14.423 
[2024-12-06 13:42:07.486731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:47:14.423 [2024-12-06 13:42:07.486744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:47:14.423 [2024-12-06 13:42:07.486754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:47:14.423 [2024-12-06 13:42:07.486766] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:47:14.423 [2024-12-06 13:42:07.486780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:14.423 [2024-12-06 13:42:07.486808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:47:14.423 [2024-12-06 13:42:07.486829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:47:14.423 [2024-12-06 13:42:07.486851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:47:14.423 [2024-12-06 13:42:07.486870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:47:14.423 [2024-12-06 13:42:07.486890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:47:14.423 [2024-12-06 13:42:07.486912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:47:14.423 [2024-12-06 13:42:07.486933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:47:14.423 [2024-12-06 13:42:07.486952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:47:14.424 [2024-12-06 13:42:07.486973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:47:14.424 [2024-12-06 13:42:07.486989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:47:14.424 [2024-12-06 13:42:07.487009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:47:14.424 [2024-12-06 13:42:07.487029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:47:14.424 [2024-12-06 13:42:07.487049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:47:14.424 [2024-12-06 13:42:07.487068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:47:14.424 [2024-12-06 13:42:07.487086] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:47:14.424 [2024-12-06 13:42:07.487106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:14.424 [2024-12-06 13:42:07.487127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:47:14.424 [2024-12-06 13:42:07.487146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:47:14.424 [2024-12-06 13:42:07.487168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:47:14.424 [2024-12-06 13:42:07.487187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:47:14.424 [2024-12-06 13:42:07.487210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.424 [2024-12-06 13:42:07.487231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:47:14.424 [2024-12-06 13:42:07.487253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:47:14.424 [2024-12-06 13:42:07.487272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.684 [2024-12-06 13:42:07.539620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.684 [2024-12-06 13:42:07.539673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:14.684 [2024-12-06 13:42:07.539690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.259 ms 00:47:14.684 [2024-12-06 13:42:07.539708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.684 [2024-12-06 13:42:07.539810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.684 [2024-12-06 13:42:07.539822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:47:14.684 [2024-12-06 13:42:07.539834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:47:14.684 [2024-12-06 13:42:07.539844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.684 [2024-12-06 13:42:07.604881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.684 [2024-12-06 13:42:07.604932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:14.684 [2024-12-06 13:42:07.604948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.926 ms 00:47:14.684 [2024-12-06 13:42:07.604958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.684 [2024-12-06 13:42:07.605017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.684 [2024-12-06 13:42:07.605034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:14.684 [2024-12-06 13:42:07.605045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:47:14.684 [2024-12-06 13:42:07.605055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.684 [2024-12-06 13:42:07.605966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.684 [2024-12-06 13:42:07.605994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:14.684 [2024-12-06 13:42:07.606007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:47:14.684 [2024-12-06 13:42:07.606024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.685 [2024-12-06 13:42:07.606222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.685 [2024-12-06 13:42:07.606240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:14.685 [2024-12-06 13:42:07.606256] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:47:14.685 [2024-12-06 13:42:07.606268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.685 [2024-12-06 13:42:07.629942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.685 [2024-12-06 13:42:07.629986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:14.685 [2024-12-06 13:42:07.630000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.647 ms 00:47:14.685 [2024-12-06 13:42:07.630011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.685 [2024-12-06 13:42:07.650356] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:47:14.685 [2024-12-06 13:42:07.650392] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:47:14.685 [2024-12-06 13:42:07.650418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.685 [2024-12-06 13:42:07.650446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:47:14.685 [2024-12-06 13:42:07.650457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.276 ms 00:47:14.685 [2024-12-06 13:42:07.650469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.685 [2024-12-06 13:42:07.679476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.685 [2024-12-06 13:42:07.679525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:47:14.685 [2024-12-06 13:42:07.679548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.957 ms 00:47:14.685 [2024-12-06 13:42:07.679558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.685 [2024-12-06 13:42:07.697410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.685 [2024-12-06 13:42:07.697445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:47:14.685 [2024-12-06 13:42:07.697458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.773 ms 00:47:14.685 [2024-12-06 13:42:07.697484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.685 [2024-12-06 13:42:07.715208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.685 [2024-12-06 13:42:07.715374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:47:14.685 [2024-12-06 13:42:07.715411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.686 ms 00:47:14.685 [2024-12-06 13:42:07.715423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.685 [2024-12-06 13:42:07.716264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.685 [2024-12-06 13:42:07.716302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:47:14.685 [2024-12-06 13:42:07.716328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:47:14.685 [2024-12-06 13:42:07.716347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.945 [2024-12-06 13:42:07.813067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.945 [2024-12-06 13:42:07.813150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:47:14.945 [2024-12-06 13:42:07.813176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 96.672 ms 00:47:14.945 [2024-12-06 13:42:07.813189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.945 [2024-12-06 13:42:07.824311] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:47:14.945 [2024-12-06 13:42:07.828989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.945 [2024-12-06 13:42:07.829023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:47:14.945 [2024-12-06 13:42:07.829040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.743 ms 00:47:14.945 [2024-12-06 13:42:07.829053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.945 [2024-12-06 13:42:07.829188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.945 [2024-12-06 13:42:07.829204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:47:14.945 [2024-12-06 13:42:07.829223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:47:14.945 [2024-12-06 13:42:07.829235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.945 [2024-12-06 13:42:07.829331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.945 [2024-12-06 13:42:07.829346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:47:14.945 [2024-12-06 13:42:07.829359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:47:14.945 [2024-12-06 13:42:07.829372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.945 [2024-12-06 13:42:07.829434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.945 [2024-12-06 13:42:07.829450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:47:14.946 [2024-12-06 13:42:07.829463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:47:14.946 [2024-12-06 13:42:07.829477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.946 [2024-12-06 13:42:07.829528] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:47:14.946 [2024-12-06 13:42:07.829544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.946 [2024-12-06 13:42:07.829557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:47:14.946 [2024-12-06 13:42:07.829571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:47:14.946 [2024-12-06 13:42:07.829584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.946 [2024-12-06 13:42:07.867616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.946 [2024-12-06 13:42:07.867653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:47:14.946 [2024-12-06 13:42:07.867690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.004 ms 00:47:14.946 [2024-12-06 13:42:07.867701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:14.946 [2024-12-06 13:42:07.867778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:14.946 [2024-12-06 13:42:07.867791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:47:14.946 [2024-12-06 13:42:07.867803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:47:14.946 [2024-12-06 13:42:07.867813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:47:14.946 [2024-12-06 13:42:07.869552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 428.100 ms, result 0 00:47:15.883  [2024-12-06T13:42:09.917Z] Copying: 28/1024 [MB] (28 MBps) [2024-12-06T13:42:11.294Z] Copying: 56/1024 [MB] (28 MBps) [2024-12-06T13:42:12.230Z] Copying: 87/1024 [MB] (30 MBps) [2024-12-06T13:42:13.167Z] Copying: 118/1024 [MB] (30 MBps) [2024-12-06T13:42:14.103Z] Copying: 148/1024 [MB] (30 MBps) [2024-12-06T13:42:15.041Z] Copying: 179/1024 [MB] (30 MBps) [2024-12-06T13:42:15.975Z] Copying: 210/1024 [MB] (30 MBps) [2024-12-06T13:42:16.953Z] Copying: 241/1024 [MB] (31 MBps) [2024-12-06T13:42:17.889Z] Copying: 273/1024 [MB] (31 MBps) [2024-12-06T13:42:19.267Z] Copying: 304/1024 [MB] (31 MBps) [2024-12-06T13:42:20.204Z] Copying: 335/1024 [MB] (31 MBps) [2024-12-06T13:42:21.139Z] Copying: 367/1024 [MB] (31 MBps) [2024-12-06T13:42:22.075Z] Copying: 398/1024 [MB] (31 MBps) [2024-12-06T13:42:23.009Z] Copying: 428/1024 [MB] (30 MBps) [2024-12-06T13:42:23.945Z] Copying: 459/1024 [MB] (31 MBps) [2024-12-06T13:42:25.321Z] Copying: 491/1024 [MB] (31 MBps) [2024-12-06T13:42:25.887Z] Copying: 522/1024 [MB] (30 MBps) [2024-12-06T13:42:27.265Z] Copying: 552/1024 [MB] (30 MBps) [2024-12-06T13:42:28.204Z] Copying: 583/1024 [MB] (30 MBps) [2024-12-06T13:42:29.141Z] Copying: 613/1024 [MB] (30 MBps) [2024-12-06T13:42:30.079Z] Copying: 644/1024 [MB] (30 MBps) [2024-12-06T13:42:31.015Z] Copying: 674/1024 [MB] (30 MBps) [2024-12-06T13:42:31.949Z] Copying: 705/1024 [MB] (30 MBps) [2024-12-06T13:42:32.884Z] Copying: 736/1024 [MB] (31 MBps) [2024-12-06T13:42:34.280Z] Copying: 768/1024 [MB] (31 MBps) [2024-12-06T13:42:35.217Z] Copying: 799/1024 [MB] (31 MBps) [2024-12-06T13:42:36.151Z] Copying: 830/1024 [MB] (31 MBps) [2024-12-06T13:42:37.087Z] Copying: 861/1024 [MB] (30 MBps) [2024-12-06T13:42:38.027Z] Copying: 891/1024 [MB] (30 MBps) [2024-12-06T13:42:38.963Z] Copying: 922/1024 [MB] (30 MBps) [2024-12-06T13:42:39.899Z] Copying: 953/1024 [MB] (31 MBps) [2024-12-06T13:42:41.273Z] Copying: 984/1024 [MB] (30 MBps) [2024-12-06T13:42:42.208Z] Copying: 1014/1024 [MB] (29 MBps) [2024-12-06T13:42:42.208Z] Copying: 1048384/1048576 [kB] (10048 kBps) [2024-12-06T13:42:42.208Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-12-06 13:42:42.067143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.108 [2024-12-06 13:42:42.067415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:47:49.108 [2024-12-06 13:42:42.067464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:49.108 [2024-12-06 13:42:42.067477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.109 [2024-12-06 13:42:42.070136] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:47:49.109 [2024-12-06 13:42:42.076986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.109 [2024-12-06 13:42:42.077136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:47:49.109 [2024-12-06 13:42:42.077281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.810 ms 00:47:49.109 [2024-12-06 13:42:42.077301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.109 [2024-12-06 13:42:42.087795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.109 [2024-12-06 13:42:42.087942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Stop core poller 00:47:49.109 [2024-12-06 13:42:42.088028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.842 ms 00:47:49.109 [2024-12-06 13:42:42.088079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.109 [2024-12-06 13:42:42.109861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.109 [2024-12-06 13:42:42.110043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:47:49.109 [2024-12-06 13:42:42.110155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.736 ms 00:47:49.109 [2024-12-06 13:42:42.110199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.109 [2024-12-06 13:42:42.115352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.109 [2024-12-06 13:42:42.115518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:47:49.109 [2024-12-06 13:42:42.115607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.015 ms 00:47:49.109 [2024-12-06 13:42:42.115653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.109 [2024-12-06 13:42:42.154194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.109 [2024-12-06 13:42:42.154349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:47:49.109 [2024-12-06 13:42:42.154475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.455 ms 00:47:49.109 [2024-12-06 13:42:42.154514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.109 [2024-12-06 13:42:42.175979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.109 [2024-12-06 13:42:42.176123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:47:49.109 [2024-12-06 13:42:42.176146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.405 ms 00:47:49.109 [2024-12-06 13:42:42.176158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.368 [2024-12-06 13:42:42.281784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.368 [2024-12-06 13:42:42.281836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:47:49.368 [2024-12-06 13:42:42.281854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.581 ms 00:47:49.368 [2024-12-06 13:42:42.281866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.368 [2024-12-06 13:42:42.320380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.368 [2024-12-06 13:42:42.320554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:47:49.368 [2024-12-06 13:42:42.320578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.496 ms 00:47:49.368 [2024-12-06 13:42:42.320590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.368 [2024-12-06 13:42:42.357560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.368 [2024-12-06 13:42:42.357596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:47:49.368 [2024-12-06 13:42:42.357610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.904 ms 00:47:49.368 [2024-12-06 13:42:42.357637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.368 [2024-12-06 13:42:42.393283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.368 [2024-12-06 
13:42:42.393458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:47:49.368 [2024-12-06 13:42:42.393480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.607 ms 00:47:49.368 [2024-12-06 13:42:42.393491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.368 [2024-12-06 13:42:42.429033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.368 [2024-12-06 13:42:42.429158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:47:49.368 [2024-12-06 13:42:42.429194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.407 ms 00:47:49.368 [2024-12-06 13:42:42.429205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.368 [2024-12-06 13:42:42.429297] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:47:49.368 [2024-12-06 13:42:42.429318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118272 / 261120 wr_cnt: 1 state: open 00:47:49.368 [2024-12-06 13:42:42.429332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:47:49.368 [2024-12-06 13:42:42.429799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429821] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.429997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 
13:42:42.430098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:47:49.369 [2024-12-06 13:42:42.430371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:49.369 [2024-12-06 13:42:42.430465] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:49.369 [2024-12-06 13:42:42.430476] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5eb76c83-dca2-44d9-b449-b6a760e65e2b 00:47:49.369 [2024-12-06 13:42:42.430487] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118272 00:47:49.369 [2024-12-06 13:42:42.430498] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119232 00:47:49.369 [2024-12-06 13:42:42.430509] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118272 00:47:49.369 [2024-12-06 13:42:42.430520] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:47:49.369 [2024-12-06 13:42:42.430548] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:49.369 [2024-12-06 13:42:42.430559] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:49.369 [2024-12-06 13:42:42.430580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:49.369 [2024-12-06 13:42:42.430590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:49.369 [2024-12-06 13:42:42.430599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:49.369 [2024-12-06 13:42:42.430609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.369 [2024-12-06 13:42:42.430631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:49.369 [2024-12-06 13:42:42.430641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:47:49.369 [2024-12-06 13:42:42.430651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.369 [2024-12-06 13:42:42.451257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.369 [2024-12-06 13:42:42.451288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:49.369 [2024-12-06 13:42:42.451307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.572 ms 00:47:49.369 [2024-12-06 13:42:42.451318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.369 [2024-12-06 13:42:42.451966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:49.369 [2024-12-06 13:42:42.451983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:49.369 [2024-12-06 13:42:42.451995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:47:49.369 [2024-12-06 
13:42:42.452006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.627 [2024-12-06 13:42:42.507181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.628 [2024-12-06 13:42:42.507229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:49.628 [2024-12-06 13:42:42.507244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.628 [2024-12-06 13:42:42.507255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.628 [2024-12-06 13:42:42.507326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.628 [2024-12-06 13:42:42.507339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:49.628 [2024-12-06 13:42:42.507349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.628 [2024-12-06 13:42:42.507360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.628 [2024-12-06 13:42:42.507464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.628 [2024-12-06 13:42:42.507484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:49.628 [2024-12-06 13:42:42.507496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.628 [2024-12-06 13:42:42.507506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.628 [2024-12-06 13:42:42.507526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.628 [2024-12-06 13:42:42.507545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:49.628 [2024-12-06 13:42:42.507572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.628 [2024-12-06 13:42:42.507582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.628 [2024-12-06 13:42:42.641080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.628 [2024-12-06 13:42:42.641165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:49.628 [2024-12-06 13:42:42.641182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.628 [2024-12-06 13:42:42.641192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.887 [2024-12-06 13:42:42.748273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.887 [2024-12-06 13:42:42.748585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:49.887 [2024-12-06 13:42:42.748609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.887 [2024-12-06 13:42:42.748622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.887 [2024-12-06 13:42:42.748756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.887 [2024-12-06 13:42:42.748770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:49.887 [2024-12-06 13:42:42.748782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.887 [2024-12-06 13:42:42.748799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.887 [2024-12-06 13:42:42.748857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.887 [2024-12-06 13:42:42.748871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:49.887 [2024-12-06 13:42:42.748882] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.887 [2024-12-06 13:42:42.748893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.887 [2024-12-06 13:42:42.749026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.887 [2024-12-06 13:42:42.749040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:49.887 [2024-12-06 13:42:42.749052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.887 [2024-12-06 13:42:42.749068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.887 [2024-12-06 13:42:42.749108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.887 [2024-12-06 13:42:42.749121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:49.887 [2024-12-06 13:42:42.749132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.887 [2024-12-06 13:42:42.749143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.887 [2024-12-06 13:42:42.749191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.887 [2024-12-06 13:42:42.749203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:49.887 [2024-12-06 13:42:42.749214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.887 [2024-12-06 13:42:42.749225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.887 [2024-12-06 13:42:42.749283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:49.887 [2024-12-06 13:42:42.749297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:49.887 [2024-12-06 13:42:42.749308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:49.887 [2024-12-06 13:42:42.749318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:49.887 [2024-12-06 13:42:42.749489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 683.982 ms, result 0 00:47:51.791 00:47:51.791 00:47:51.791 13:42:44 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:47:51.791 [2024-12-06 13:42:44.607820] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:47:51.791 [2024-12-06 13:42:44.608031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81361 ] 00:47:51.791 [2024-12-06 13:42:44.803000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:52.050 [2024-12-06 13:42:44.947092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:52.308 [2024-12-06 13:42:45.382620] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:52.308 [2024-12-06 13:42:45.382711] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:52.567 [2024-12-06 13:42:45.549280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.567 [2024-12-06 13:42:45.549340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:47:52.567 [2024-12-06 13:42:45.549358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:47:52.567 [2024-12-06 13:42:45.549369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.567 [2024-12-06 13:42:45.549432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.567 [2024-12-06 13:42:45.549465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:52.567 [2024-12-06 13:42:45.549477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:47:52.567 [2024-12-06 13:42:45.549487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.567 [2024-12-06 13:42:45.549510] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:47:52.567 [2024-12-06 13:42:45.550524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:47:52.567 [2024-12-06 13:42:45.550551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.567 [2024-12-06 13:42:45.550563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:52.567 [2024-12-06 13:42:45.550575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:47:52.567 [2024-12-06 13:42:45.550585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.567 [2024-12-06 13:42:45.553116] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:47:52.567 [2024-12-06 13:42:45.573673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.567 [2024-12-06 13:42:45.573831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:47:52.567 [2024-12-06 13:42:45.573854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.557 ms 00:47:52.567 [2024-12-06 13:42:45.573866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.567 [2024-12-06 13:42:45.573939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.567 [2024-12-06 13:42:45.573953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:47:52.567 [2024-12-06 13:42:45.573965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:47:52.567 [2024-12-06 13:42:45.573975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.567 [2024-12-06 13:42:45.586697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:47:52.567 [2024-12-06 13:42:45.586727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:52.567 [2024-12-06 13:42:45.586741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.645 ms 00:47:52.567 [2024-12-06 13:42:45.586756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.567 [2024-12-06 13:42:45.586847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.567 [2024-12-06 13:42:45.586860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:52.567 [2024-12-06 13:42:45.586872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:47:52.567 [2024-12-06 13:42:45.586882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.567 [2024-12-06 13:42:45.586938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.567 [2024-12-06 13:42:45.586951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:47:52.567 [2024-12-06 13:42:45.586962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:47:52.567 [2024-12-06 13:42:45.586972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.567 [2024-12-06 13:42:45.587002] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:47:52.567 [2024-12-06 13:42:45.592852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.567 [2024-12-06 13:42:45.592882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:52.567 [2024-12-06 13:42:45.592915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.857 ms 00:47:52.567 [2024-12-06 13:42:45.592926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.567 [2024-12-06 13:42:45.592964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.567 [2024-12-06 13:42:45.592975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:47:52.567 [2024-12-06 13:42:45.592986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:47:52.567 [2024-12-06 13:42:45.592997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.567 [2024-12-06 13:42:45.593033] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:47:52.567 [2024-12-06 13:42:45.593064] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:47:52.567 [2024-12-06 13:42:45.593101] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:47:52.567 [2024-12-06 13:42:45.593125] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:47:52.567 [2024-12-06 13:42:45.593220] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:47:52.567 [2024-12-06 13:42:45.593235] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:47:52.567 [2024-12-06 13:42:45.593249] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:47:52.567 [2024-12-06 13:42:45.593262] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:47:52.567 [2024-12-06 13:42:45.593274] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:47:52.567 [2024-12-06 13:42:45.593286] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:47:52.567 [2024-12-06 13:42:45.593298] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:47:52.567 [2024-12-06 13:42:45.593312] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:47:52.567 [2024-12-06 13:42:45.593323] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:47:52.567 [2024-12-06 13:42:45.593333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.567 [2024-12-06 13:42:45.593344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:47:52.567 [2024-12-06 13:42:45.593355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:47:52.567 [2024-12-06 13:42:45.593365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.567 [2024-12-06 13:42:45.593453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.567 [2024-12-06 13:42:45.593466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:47:52.567 [2024-12-06 13:42:45.593477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:47:52.567 [2024-12-06 13:42:45.593487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.567 [2024-12-06 13:42:45.593582] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:47:52.567 [2024-12-06 13:42:45.593596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:47:52.567 [2024-12-06 13:42:45.593607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:52.567 [2024-12-06 13:42:45.593618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:52.567 [2024-12-06 13:42:45.593629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:47:52.567 [2024-12-06 13:42:45.593639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:47:52.567 [2024-12-06 13:42:45.593649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:47:52.567 [2024-12-06 13:42:45.593659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:47:52.567 [2024-12-06 13:42:45.593669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:47:52.567 [2024-12-06 13:42:45.593678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:52.567 [2024-12-06 13:42:45.593690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:47:52.567 [2024-12-06 13:42:45.593700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:47:52.567 [2024-12-06 13:42:45.593709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:52.567 [2024-12-06 13:42:45.593731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:47:52.567 [2024-12-06 13:42:45.593741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:47:52.567 [2024-12-06 13:42:45.593751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:52.567 [2024-12-06 13:42:45.593760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:47:52.567 [2024-12-06 13:42:45.593770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:47:52.567 [2024-12-06 13:42:45.593779] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:52.568 [2024-12-06 13:42:45.593788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:47:52.568 [2024-12-06 13:42:45.593798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:47:52.568 [2024-12-06 13:42:45.593807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:52.568 [2024-12-06 13:42:45.593817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:47:52.568 [2024-12-06 13:42:45.593827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:47:52.568 [2024-12-06 13:42:45.593836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:52.568 [2024-12-06 13:42:45.593846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:47:52.568 [2024-12-06 13:42:45.593855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:47:52.568 [2024-12-06 13:42:45.593864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:52.568 [2024-12-06 13:42:45.593874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:47:52.568 [2024-12-06 13:42:45.593883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:47:52.568 [2024-12-06 13:42:45.593892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:52.568 [2024-12-06 13:42:45.593901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:47:52.568 [2024-12-06 13:42:45.593910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:47:52.568 [2024-12-06 13:42:45.593919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:52.568 [2024-12-06 13:42:45.593928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:47:52.568 [2024-12-06 13:42:45.593937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:47:52.568 [2024-12-06 13:42:45.593946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:52.568 [2024-12-06 13:42:45.593955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:47:52.568 [2024-12-06 13:42:45.593964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:47:52.568 [2024-12-06 13:42:45.593973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:52.568 [2024-12-06 13:42:45.593981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:47:52.568 [2024-12-06 13:42:45.593990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:47:52.568 [2024-12-06 13:42:45.594003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:52.568 [2024-12-06 13:42:45.594013] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:47:52.568 [2024-12-06 13:42:45.594024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:47:52.568 [2024-12-06 13:42:45.594034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:52.568 [2024-12-06 13:42:45.594044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:52.568 [2024-12-06 13:42:45.594055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:47:52.568 [2024-12-06 13:42:45.594065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:47:52.568 [2024-12-06 13:42:45.594074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:47:52.568 
[2024-12-06 13:42:45.594084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:47:52.568 [2024-12-06 13:42:45.594093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:47:52.568 [2024-12-06 13:42:45.594103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:47:52.568 [2024-12-06 13:42:45.594113] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:47:52.568 [2024-12-06 13:42:45.594126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:52.568 [2024-12-06 13:42:45.594143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:47:52.568 [2024-12-06 13:42:45.594154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:47:52.568 [2024-12-06 13:42:45.594165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:47:52.568 [2024-12-06 13:42:45.594176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:47:52.568 [2024-12-06 13:42:45.594187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:47:52.568 [2024-12-06 13:42:45.594199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:47:52.568 [2024-12-06 13:42:45.594210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:47:52.568 [2024-12-06 13:42:45.594220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:47:52.568 [2024-12-06 13:42:45.594231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:47:52.568 [2024-12-06 13:42:45.594241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:47:52.568 [2024-12-06 13:42:45.594251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:47:52.568 [2024-12-06 13:42:45.594262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:47:52.568 [2024-12-06 13:42:45.594272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:47:52.568 [2024-12-06 13:42:45.594284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:47:52.568 [2024-12-06 13:42:45.594294] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:47:52.568 [2024-12-06 13:42:45.594306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:52.568 [2024-12-06 13:42:45.594327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:47:52.568 [2024-12-06 13:42:45.594339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:47:52.568 [2024-12-06 13:42:45.594350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:47:52.568 [2024-12-06 13:42:45.594362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:47:52.568 [2024-12-06 13:42:45.594373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.568 [2024-12-06 13:42:45.594385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:47:52.568 [2024-12-06 13:42:45.594411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:47:52.568 [2024-12-06 13:42:45.594423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.568 [2024-12-06 13:42:45.644795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.568 [2024-12-06 13:42:45.644839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:52.568 [2024-12-06 13:42:45.644854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.316 ms 00:47:52.568 [2024-12-06 13:42:45.644887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.568 [2024-12-06 13:42:45.644978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.568 [2024-12-06 13:42:45.644991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:47:52.568 [2024-12-06 13:42:45.645003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:47:52.568 [2024-12-06 13:42:45.645023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.827 [2024-12-06 13:42:45.709700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.827 [2024-12-06 13:42:45.709744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:52.827 [2024-12-06 13:42:45.709760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.575 ms 00:47:52.827 [2024-12-06 13:42:45.709770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.827 [2024-12-06 13:42:45.709814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.827 [2024-12-06 13:42:45.709830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:52.827 [2024-12-06 13:42:45.709842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:47:52.827 [2024-12-06 13:42:45.709852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.827 [2024-12-06 13:42:45.710733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.827 [2024-12-06 13:42:45.710755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:52.827 [2024-12-06 13:42:45.710767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:47:52.827 [2024-12-06 13:42:45.710778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.827 [2024-12-06 13:42:45.710919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.827 [2024-12-06 13:42:45.710950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:52.827 [2024-12-06 13:42:45.710965] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:47:52.827 [2024-12-06 13:42:45.710976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.827 [2024-12-06 13:42:45.734352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.827 [2024-12-06 13:42:45.734406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:52.827 [2024-12-06 13:42:45.734420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.353 ms 00:47:52.827 [2024-12-06 13:42:45.734430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.827 [2024-12-06 13:42:45.754669] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:47:52.827 [2024-12-06 13:42:45.754707] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:47:52.827 [2024-12-06 13:42:45.754722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.827 [2024-12-06 13:42:45.754734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:47:52.827 [2024-12-06 13:42:45.754745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.151 ms 00:47:52.827 [2024-12-06 13:42:45.754755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.827 [2024-12-06 13:42:45.783864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.827 [2024-12-06 13:42:45.783901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:47:52.827 [2024-12-06 13:42:45.783916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.067 ms 00:47:52.827 [2024-12-06 13:42:45.783926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.827 [2024-12-06 13:42:45.801665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.827 [2024-12-06 13:42:45.801833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:47:52.827 [2024-12-06 13:42:45.801955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.691 ms 00:47:52.827 [2024-12-06 13:42:45.801995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.827 [2024-12-06 13:42:45.820040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.827 [2024-12-06 13:42:45.820173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:47:52.828 [2024-12-06 13:42:45.820270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.987 ms 00:47:52.828 [2024-12-06 13:42:45.820285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.828 [2024-12-06 13:42:45.821144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.828 [2024-12-06 13:42:45.821175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:47:52.828 [2024-12-06 13:42:45.821193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:47:52.828 [2024-12-06 13:42:45.821205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:52.828 [2024-12-06 13:42:45.918047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:52.828 [2024-12-06 13:42:45.918131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:47:52.828 [2024-12-06 13:42:45.918157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 96.814 ms 00:47:52.828 [2024-12-06 13:42:45.918168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.087 [2024-12-06 13:42:45.929762] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:47:53.087 [2024-12-06 13:42:45.934584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:53.087 [2024-12-06 13:42:45.934617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:47:53.087 [2024-12-06 13:42:45.934634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.357 ms 00:47:53.087 [2024-12-06 13:42:45.934645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.087 [2024-12-06 13:42:45.934786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:53.087 [2024-12-06 13:42:45.934801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:47:53.087 [2024-12-06 13:42:45.934818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:47:53.087 [2024-12-06 13:42:45.934830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.087 [2024-12-06 13:42:45.937171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:53.087 [2024-12-06 13:42:45.937210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:47:53.087 [2024-12-06 13:42:45.937236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.296 ms 00:47:53.087 [2024-12-06 13:42:45.937247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.087 [2024-12-06 13:42:45.937281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:53.087 [2024-12-06 13:42:45.937293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:47:53.087 [2024-12-06 13:42:45.937304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:47:53.087 [2024-12-06 13:42:45.937315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.087 [2024-12-06 13:42:45.937368] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:47:53.087 [2024-12-06 13:42:45.937382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:53.087 [2024-12-06 13:42:45.937393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:47:53.087 [2024-12-06 13:42:45.937421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:47:53.087 [2024-12-06 13:42:45.937431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.087 [2024-12-06 13:42:45.976766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:53.087 [2024-12-06 13:42:45.976810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:47:53.087 [2024-12-06 13:42:45.976842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.311 ms 00:47:53.087 [2024-12-06 13:42:45.976870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:53.087 [2024-12-06 13:42:45.976959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:53.087 [2024-12-06 13:42:45.976973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:47:53.087 [2024-12-06 13:42:45.976986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:47:53.087 [2024-12-06 13:42:45.976997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
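As the step names suggest, FTL flags itself dirty once startup completes ('Set FTL dirty state' above) and clean again on orderly shutdown; the restore test relies on that flag to show data survives the cycle. The transitions can be spotted in a saved copy of this output (build.log is a hypothetical capture):

    # Show the dirty/clean superblock transitions for this run.
    grep -E 'Set FTL (dirty|clean) state' build.log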
00:47:53.087 [2024-12-06 13:42:45.978603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 428.728 ms, result 0 00:47:54.492  [2024-12-06T13:42:48.529Z] Copying: 28/1024 [MB] (28 MBps) [2024-12-06T13:42:49.466Z] Copying: 59/1024 [MB] (31 MBps) [2024-12-06T13:42:50.403Z] Copying: 91/1024 [MB] (31 MBps) [2024-12-06T13:42:51.341Z] Copying: 123/1024 [MB] (32 MBps) [2024-12-06T13:42:52.278Z] Copying: 155/1024 [MB] (31 MBps) [2024-12-06T13:42:53.656Z] Copying: 187/1024 [MB] (31 MBps) [2024-12-06T13:42:54.223Z] Copying: 218/1024 [MB] (31 MBps) [2024-12-06T13:42:55.620Z] Copying: 248/1024 [MB] (30 MBps) [2024-12-06T13:42:56.556Z] Copying: 279/1024 [MB] (31 MBps) [2024-12-06T13:42:57.494Z] Copying: 311/1024 [MB] (32 MBps) [2024-12-06T13:42:58.428Z] Copying: 343/1024 [MB] (31 MBps) [2024-12-06T13:42:59.360Z] Copying: 374/1024 [MB] (30 MBps) [2024-12-06T13:43:00.295Z] Copying: 405/1024 [MB] (30 MBps) [2024-12-06T13:43:01.246Z] Copying: 436/1024 [MB] (31 MBps) [2024-12-06T13:43:02.653Z] Copying: 466/1024 [MB] (29 MBps) [2024-12-06T13:43:03.586Z] Copying: 496/1024 [MB] (30 MBps) [2024-12-06T13:43:04.522Z] Copying: 527/1024 [MB] (31 MBps) [2024-12-06T13:43:05.458Z] Copying: 558/1024 [MB] (31 MBps) [2024-12-06T13:43:06.392Z] Copying: 589/1024 [MB] (30 MBps) [2024-12-06T13:43:07.329Z] Copying: 620/1024 [MB] (30 MBps) [2024-12-06T13:43:08.265Z] Copying: 650/1024 [MB] (30 MBps) [2024-12-06T13:43:09.642Z] Copying: 680/1024 [MB] (30 MBps) [2024-12-06T13:43:10.578Z] Copying: 711/1024 [MB] (30 MBps) [2024-12-06T13:43:11.515Z] Copying: 740/1024 [MB] (28 MBps) [2024-12-06T13:43:12.453Z] Copying: 770/1024 [MB] (29 MBps) [2024-12-06T13:43:13.393Z] Copying: 800/1024 [MB] (30 MBps) [2024-12-06T13:43:14.326Z] Copying: 831/1024 [MB] (30 MBps) [2024-12-06T13:43:15.260Z] Copying: 862/1024 [MB] (30 MBps) [2024-12-06T13:43:16.637Z] Copying: 893/1024 [MB] (30 MBps) [2024-12-06T13:43:17.221Z] Copying: 923/1024 [MB] (30 MBps) [2024-12-06T13:43:18.599Z] Copying: 954/1024 [MB] (31 MBps) [2024-12-06T13:43:19.533Z] Copying: 984/1024 [MB] (29 MBps) [2024-12-06T13:43:19.533Z] Copying: 1014/1024 [MB] (30 MBps) [2024-12-06T13:43:20.099Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-12-06 13:43:19.936930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.000 [2024-12-06 13:43:19.937033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:48:27.000 [2024-12-06 13:43:19.937080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:48:27.000 [2024-12-06 13:43:19.937093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.000 [2024-12-06 13:43:19.937126] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:48:27.000 [2024-12-06 13:43:19.944815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.000 [2024-12-06 13:43:19.944918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:48:27.000 [2024-12-06 13:43:19.944950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.651 ms 00:48:27.000 [2024-12-06 13:43:19.944972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.000 [2024-12-06 13:43:19.945364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.000 [2024-12-06 13:43:19.945415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:48:27.000 [2024-12-06 13:43:19.945439] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:48:27.000 [2024-12-06 13:43:19.945475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.000 [2024-12-06 13:43:19.950538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.000 [2024-12-06 13:43:19.950586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:48:27.000 [2024-12-06 13:43:19.950603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.031 ms 00:48:27.000 [2024-12-06 13:43:19.950617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.000 [2024-12-06 13:43:19.956506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.000 [2024-12-06 13:43:19.956566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:48:27.000 [2024-12-06 13:43:19.956581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.845 ms 00:48:27.000 [2024-12-06 13:43:19.956604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.000 [2024-12-06 13:43:19.997608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.000 [2024-12-06 13:43:19.997651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:48:27.000 [2024-12-06 13:43:19.997667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.949 ms 00:48:27.000 [2024-12-06 13:43:19.997678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.000 [2024-12-06 13:43:20.018012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.000 [2024-12-06 13:43:20.018053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:48:27.000 [2024-12-06 13:43:20.018069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.290 ms 00:48:27.000 [2024-12-06 13:43:20.018081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.260 [2024-12-06 13:43:20.130952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.260 [2024-12-06 13:43:20.131033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:48:27.260 [2024-12-06 13:43:20.131052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.823 ms 00:48:27.260 [2024-12-06 13:43:20.131064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.260 [2024-12-06 13:43:20.168501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.260 [2024-12-06 13:43:20.168550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:48:27.260 [2024-12-06 13:43:20.168566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.417 ms 00:48:27.260 [2024-12-06 13:43:20.168593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.260 [2024-12-06 13:43:20.204878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.260 [2024-12-06 13:43:20.204915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:48:27.260 [2024-12-06 13:43:20.204929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.245 ms 00:48:27.260 [2024-12-06 13:43:20.204956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.260 [2024-12-06 13:43:20.242735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.260 [2024-12-06 13:43:20.242939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist superblock 00:48:27.260 [2024-12-06 13:43:20.242962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.739 ms 00:48:27.260 [2024-12-06 13:43:20.242974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.260 [2024-12-06 13:43:20.279144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.260 [2024-12-06 13:43:20.279181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:48:27.260 [2024-12-06 13:43:20.279194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.059 ms 00:48:27.260 [2024-12-06 13:43:20.279220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.260 [2024-12-06 13:43:20.279261] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:48:27.260 [2024-12-06 13:43:20.279280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:48:27.260 [2024-12-06 13:43:20.279296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 
13:43:20.279512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:48:27.260 [2024-12-06 13:43:20.279604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:48:27.261 [2024-12-06 13:43:20.279826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.279999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 
wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:48:27.261 [2024-12-06 13:43:20.280457] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:48:27.261 [2024-12-06 13:43:20.280468] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5eb76c83-dca2-44d9-b449-b6a760e65e2b 00:48:27.261 [2024-12-06 13:43:20.280480] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:48:27.261 [2024-12-06 13:43:20.280491] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13760 00:48:27.261 [2024-12-06 13:43:20.280501] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12800 00:48:27.261 [2024-12-06 13:43:20.280513] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0750 00:48:27.261 [2024-12-06 13:43:20.280530] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:48:27.261 [2024-12-06 13:43:20.280555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:48:27.261 [2024-12-06 13:43:20.280566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:48:27.261 [2024-12-06 13:43:20.280575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:48:27.261 [2024-12-06 13:43:20.280585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:48:27.261 [2024-12-06 13:43:20.280595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.261 [2024-12-06 13:43:20.280607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:48:27.261 [2024-12-06 13:43:20.280618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:48:27.261 [2024-12-06 13:43:20.280629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.261 [2024-12-06 13:43:20.301630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.261 [2024-12-06 13:43:20.301664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:48:27.261 [2024-12-06 13:43:20.301699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.963 ms 00:48:27.261 [2024-12-06 13:43:20.301710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.262 [2024-12-06 13:43:20.302261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:27.262 [2024-12-06 13:43:20.302276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:48:27.262 [2024-12-06 13:43:20.302288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:48:27.262 [2024-12-06 13:43:20.302298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.521 
[2024-12-06 13:43:20.358167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:27.521 [2024-12-06 13:43:20.358215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:27.521 [2024-12-06 13:43:20.358231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:27.521 [2024-12-06 13:43:20.358242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.521 [2024-12-06 13:43:20.358314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:27.521 [2024-12-06 13:43:20.358327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:27.521 [2024-12-06 13:43:20.358338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:27.521 [2024-12-06 13:43:20.358349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.521 [2024-12-06 13:43:20.358475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:27.521 [2024-12-06 13:43:20.358491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:27.521 [2024-12-06 13:43:20.358508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:27.522 [2024-12-06 13:43:20.358535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.522 [2024-12-06 13:43:20.358555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:27.522 [2024-12-06 13:43:20.358568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:27.522 [2024-12-06 13:43:20.358579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:27.522 [2024-12-06 13:43:20.358590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.522 [2024-12-06 13:43:20.496895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:27.522 [2024-12-06 13:43:20.496963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:27.522 [2024-12-06 13:43:20.496979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:27.522 [2024-12-06 13:43:20.496989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.522 [2024-12-06 13:43:20.599756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:27.522 [2024-12-06 13:43:20.599830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:27.522 [2024-12-06 13:43:20.599846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:27.522 [2024-12-06 13:43:20.599857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.522 [2024-12-06 13:43:20.599986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:27.522 [2024-12-06 13:43:20.599998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:27.522 [2024-12-06 13:43:20.600010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:27.522 [2024-12-06 13:43:20.600029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.522 [2024-12-06 13:43:20.600084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:27.522 [2024-12-06 13:43:20.600097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:27.522 [2024-12-06 13:43:20.600108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:27.522 [2024-12-06 13:43:20.600118] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.522 [2024-12-06 13:43:20.600251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:27.522 [2024-12-06 13:43:20.600265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:27.522 [2024-12-06 13:43:20.600285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:27.522 [2024-12-06 13:43:20.600295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.522 [2024-12-06 13:43:20.600338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:27.522 [2024-12-06 13:43:20.600352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:48:27.522 [2024-12-06 13:43:20.600363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:27.522 [2024-12-06 13:43:20.600374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.522 [2024-12-06 13:43:20.600445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:27.522 [2024-12-06 13:43:20.600459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:27.522 [2024-12-06 13:43:20.600487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:27.522 [2024-12-06 13:43:20.600498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.522 [2024-12-06 13:43:20.600559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:27.522 [2024-12-06 13:43:20.600572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:27.522 [2024-12-06 13:43:20.600583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:27.522 [2024-12-06 13:43:20.600594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:27.522 [2024-12-06 13:43:20.600745] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 663.770 ms, result 0 00:48:28.900 00:48:28.900 00:48:28.900 13:43:21 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:48:30.802 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:48:30.802 Process with pid 79985 is not found 00:48:30.802 Remove shared memory files 00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79985 00:48:30.802 13:43:23 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79985 ']' 00:48:30.802 13:43:23 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79985 00:48:30.802 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79985) - No such process 00:48:30.802 13:43:23 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79985 is not found' 00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 
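The tail of restore.sh traced above, reconstructed as a plain sketch from the xtrace lines; killprocess is a helper from autotest_common.sh, and only the behavior visible in this log is reproduced.

    # Verify the data read back from ftl0 against the pre-shutdown checksum,
    # then remove the test artifacts and stop the target process.
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    killprocess 79985   # kill -0 shows pid 79985 already gone; the helper
                        # logs "Process with pid 79985 is not found" and carries on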
00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:48:30.802 13:43:23 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:48:30.802 ************************************ 00:48:30.802 END TEST ftl_restore 00:48:30.802 ************************************ 00:48:30.802 00:48:30.802 real 2m53.959s 00:48:30.802 user 2m39.506s 00:48:30.802 sys 0m16.649s 00:48:30.802 13:43:23 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:30.802 13:43:23 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:48:30.802 13:43:23 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:48:30.802 13:43:23 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:48:30.802 13:43:23 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:30.802 13:43:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:48:30.802 ************************************ 00:48:30.802 START TEST ftl_dirty_shutdown 00:48:30.802 ************************************ 00:48:30.802 13:43:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:48:31.062 * Looking for test storage... 00:48:31.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:48:31.062 13:43:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:48:31.062 13:43:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:48:31.062 13:43:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:48:31.062 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:48:31.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:31.063 --rc genhtml_branch_coverage=1 00:48:31.063 --rc genhtml_function_coverage=1 00:48:31.063 --rc genhtml_legend=1 00:48:31.063 --rc geninfo_all_blocks=1 00:48:31.063 --rc geninfo_unexecuted_blocks=1 00:48:31.063 00:48:31.063 ' 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:48:31.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:31.063 --rc genhtml_branch_coverage=1 00:48:31.063 --rc genhtml_function_coverage=1 00:48:31.063 --rc genhtml_legend=1 00:48:31.063 --rc geninfo_all_blocks=1 00:48:31.063 --rc geninfo_unexecuted_blocks=1 00:48:31.063 00:48:31.063 ' 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:48:31.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:31.063 --rc genhtml_branch_coverage=1 00:48:31.063 --rc genhtml_function_coverage=1 00:48:31.063 --rc genhtml_legend=1 00:48:31.063 --rc geninfo_all_blocks=1 00:48:31.063 --rc geninfo_unexecuted_blocks=1 00:48:31.063 00:48:31.063 ' 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:48:31.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:31.063 --rc genhtml_branch_coverage=1 00:48:31.063 --rc genhtml_function_coverage=1 00:48:31.063 --rc genhtml_legend=1 00:48:31.063 --rc geninfo_all_blocks=1 00:48:31.063 --rc geninfo_unexecuted_blocks=1 00:48:31.063 00:48:31.063 ' 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:48:31.063 13:43:24 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81815 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81815 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81815 ']' 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:31.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:31.063 13:43:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:31.335 [2024-12-06 13:43:24.258638] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:48:31.335 [2024-12-06 13:43:24.259003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81815 ] 00:48:31.627 [2024-12-06 13:43:24.444586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:31.627 [2024-12-06 13:43:24.595841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:32.567 13:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:32.567 13:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:48:32.567 13:43:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:48:32.567 13:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:48:32.567 13:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:48:32.567 13:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:48:32.567 13:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:48:32.567 13:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:48:33.133 13:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:48:33.133 13:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:48:33.133 13:43:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:48:33.133 13:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:48:33.133 13:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:48:33.133 13:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:48:33.133 13:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:48:33.133 13:43:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:48:33.391 13:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:48:33.391 { 00:48:33.391 "name": "nvme0n1", 00:48:33.391 "aliases": [ 00:48:33.391 "102e034d-2ea7-453e-86ba-e03257e80ad9" 00:48:33.391 ], 00:48:33.391 "product_name": "NVMe disk", 00:48:33.391 "block_size": 4096, 00:48:33.391 "num_blocks": 1310720, 00:48:33.391 "uuid": "102e034d-2ea7-453e-86ba-e03257e80ad9", 00:48:33.391 "numa_id": -1, 00:48:33.391 "assigned_rate_limits": { 00:48:33.391 "rw_ios_per_sec": 0, 00:48:33.391 "rw_mbytes_per_sec": 0, 00:48:33.391 "r_mbytes_per_sec": 0, 00:48:33.391 "w_mbytes_per_sec": 0 00:48:33.391 }, 00:48:33.391 "claimed": true, 00:48:33.391 "claim_type": "read_many_write_one", 00:48:33.391 "zoned": false, 00:48:33.391 "supported_io_types": { 00:48:33.391 "read": true, 00:48:33.391 "write": true, 00:48:33.391 "unmap": true, 00:48:33.391 "flush": true, 00:48:33.391 "reset": true, 00:48:33.391 "nvme_admin": true, 00:48:33.392 "nvme_io": true, 00:48:33.392 "nvme_io_md": false, 00:48:33.392 "write_zeroes": true, 00:48:33.392 "zcopy": false, 00:48:33.392 "get_zone_info": false, 00:48:33.392 "zone_management": false, 00:48:33.392 "zone_append": false, 00:48:33.392 "compare": true, 00:48:33.392 "compare_and_write": false, 00:48:33.392 "abort": true, 00:48:33.392 "seek_hole": false, 00:48:33.392 "seek_data": false, 00:48:33.392 
"copy": true, 00:48:33.392 "nvme_iov_md": false 00:48:33.392 }, 00:48:33.392 "driver_specific": { 00:48:33.392 "nvme": [ 00:48:33.392 { 00:48:33.392 "pci_address": "0000:00:11.0", 00:48:33.392 "trid": { 00:48:33.392 "trtype": "PCIe", 00:48:33.392 "traddr": "0000:00:11.0" 00:48:33.392 }, 00:48:33.392 "ctrlr_data": { 00:48:33.392 "cntlid": 0, 00:48:33.392 "vendor_id": "0x1b36", 00:48:33.392 "model_number": "QEMU NVMe Ctrl", 00:48:33.392 "serial_number": "12341", 00:48:33.392 "firmware_revision": "8.0.0", 00:48:33.392 "subnqn": "nqn.2019-08.org.qemu:12341", 00:48:33.392 "oacs": { 00:48:33.392 "security": 0, 00:48:33.392 "format": 1, 00:48:33.392 "firmware": 0, 00:48:33.392 "ns_manage": 1 00:48:33.392 }, 00:48:33.392 "multi_ctrlr": false, 00:48:33.392 "ana_reporting": false 00:48:33.392 }, 00:48:33.392 "vs": { 00:48:33.392 "nvme_version": "1.4" 00:48:33.392 }, 00:48:33.392 "ns_data": { 00:48:33.392 "id": 1, 00:48:33.392 "can_share": false 00:48:33.392 } 00:48:33.392 } 00:48:33.392 ], 00:48:33.392 "mp_policy": "active_passive" 00:48:33.392 } 00:48:33.392 } 00:48:33.392 ]' 00:48:33.392 13:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:48:33.392 13:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:48:33.392 13:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:48:33.392 13:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:48:33.392 13:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:48:33.392 13:43:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:48:33.392 13:43:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:48:33.392 13:43:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:48:33.392 13:43:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:48:33.392 13:43:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:48:33.392 13:43:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:48:33.650 13:43:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=120fa9f7-5b87-4f84-9af6-da3398e01b9a 00:48:33.650 13:43:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:48:33.650 13:43:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 120fa9f7-5b87-4f84-9af6-da3398e01b9a 00:48:33.907 13:43:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:48:34.165 13:43:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=3bd5adc2-1a71-456f-a1a7-e9515671ce6a 00:48:34.165 13:43:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3bd5adc2-1a71-456f-a1a7-e9515671ce6a 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=c9766fc1-4726-4c61-9f61-58ede6eb1998 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c9766fc1-4726-4c61-9f61-58ede6eb1998 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=c9766fc1-4726-4c61-9f61-58ede6eb1998 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size c9766fc1-4726-4c61-9f61-58ede6eb1998 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c9766fc1-4726-4c61-9f61-58ede6eb1998 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c9766fc1-4726-4c61-9f61-58ede6eb1998 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:48:34.423 { 00:48:34.423 "name": "c9766fc1-4726-4c61-9f61-58ede6eb1998", 00:48:34.423 "aliases": [ 00:48:34.423 "lvs/nvme0n1p0" 00:48:34.423 ], 00:48:34.423 "product_name": "Logical Volume", 00:48:34.423 "block_size": 4096, 00:48:34.423 "num_blocks": 26476544, 00:48:34.423 "uuid": "c9766fc1-4726-4c61-9f61-58ede6eb1998", 00:48:34.423 "assigned_rate_limits": { 00:48:34.423 "rw_ios_per_sec": 0, 00:48:34.423 "rw_mbytes_per_sec": 0, 00:48:34.423 "r_mbytes_per_sec": 0, 00:48:34.423 "w_mbytes_per_sec": 0 00:48:34.423 }, 00:48:34.423 "claimed": false, 00:48:34.423 "zoned": false, 00:48:34.423 "supported_io_types": { 00:48:34.423 "read": true, 00:48:34.423 "write": true, 00:48:34.423 "unmap": true, 00:48:34.423 "flush": false, 00:48:34.423 "reset": true, 00:48:34.423 "nvme_admin": false, 00:48:34.423 "nvme_io": false, 00:48:34.423 "nvme_io_md": false, 00:48:34.423 "write_zeroes": true, 00:48:34.423 "zcopy": false, 00:48:34.423 "get_zone_info": false, 00:48:34.423 "zone_management": false, 00:48:34.423 "zone_append": false, 00:48:34.423 "compare": false, 00:48:34.423 "compare_and_write": false, 00:48:34.423 "abort": false, 00:48:34.423 "seek_hole": true, 00:48:34.423 "seek_data": true, 00:48:34.423 "copy": false, 00:48:34.423 "nvme_iov_md": false 00:48:34.423 }, 00:48:34.423 "driver_specific": { 00:48:34.423 "lvol": { 00:48:34.423 "lvol_store_uuid": "3bd5adc2-1a71-456f-a1a7-e9515671ce6a", 00:48:34.423 "base_bdev": "nvme0n1", 00:48:34.423 "thin_provision": true, 00:48:34.423 "num_allocated_clusters": 0, 00:48:34.423 "snapshot": false, 00:48:34.423 "clone": false, 00:48:34.423 "esnap_clone": false 00:48:34.423 } 00:48:34.423 } 00:48:34.423 } 00:48:34.423 ]' 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:48:34.423 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:48:34.681 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:48:34.681 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:48:34.681 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:48:34.681 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:48:34.681 13:43:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:48:34.681 13:43:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:48:34.681 13:43:27 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:48:34.940 13:43:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:48:34.940 13:43:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:48:34.940 13:43:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size c9766fc1-4726-4c61-9f61-58ede6eb1998 00:48:34.940 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c9766fc1-4726-4c61-9f61-58ede6eb1998 00:48:34.940 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:48:34.940 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:48:34.940 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:48:34.940 13:43:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c9766fc1-4726-4c61-9f61-58ede6eb1998 00:48:35.198 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:48:35.198 { 00:48:35.198 "name": "c9766fc1-4726-4c61-9f61-58ede6eb1998", 00:48:35.198 "aliases": [ 00:48:35.198 "lvs/nvme0n1p0" 00:48:35.198 ], 00:48:35.198 "product_name": "Logical Volume", 00:48:35.198 "block_size": 4096, 00:48:35.198 "num_blocks": 26476544, 00:48:35.198 "uuid": "c9766fc1-4726-4c61-9f61-58ede6eb1998", 00:48:35.198 "assigned_rate_limits": { 00:48:35.198 "rw_ios_per_sec": 0, 00:48:35.198 "rw_mbytes_per_sec": 0, 00:48:35.198 "r_mbytes_per_sec": 0, 00:48:35.198 "w_mbytes_per_sec": 0 00:48:35.198 }, 00:48:35.198 "claimed": false, 00:48:35.198 "zoned": false, 00:48:35.198 "supported_io_types": { 00:48:35.198 "read": true, 00:48:35.198 "write": true, 00:48:35.198 "unmap": true, 00:48:35.198 "flush": false, 00:48:35.198 "reset": true, 00:48:35.198 "nvme_admin": false, 00:48:35.198 "nvme_io": false, 00:48:35.198 "nvme_io_md": false, 00:48:35.198 "write_zeroes": true, 00:48:35.198 "zcopy": false, 00:48:35.198 "get_zone_info": false, 00:48:35.198 "zone_management": false, 00:48:35.198 "zone_append": false, 00:48:35.198 "compare": false, 00:48:35.198 "compare_and_write": false, 00:48:35.198 "abort": false, 00:48:35.198 "seek_hole": true, 00:48:35.198 "seek_data": true, 00:48:35.198 "copy": false, 00:48:35.198 "nvme_iov_md": false 00:48:35.198 }, 00:48:35.198 "driver_specific": { 00:48:35.198 "lvol": { 00:48:35.198 "lvol_store_uuid": "3bd5adc2-1a71-456f-a1a7-e9515671ce6a", 00:48:35.198 "base_bdev": "nvme0n1", 00:48:35.198 "thin_provision": true, 00:48:35.198 "num_allocated_clusters": 0, 00:48:35.198 "snapshot": false, 00:48:35.198 "clone": false, 00:48:35.198 "esnap_clone": false 00:48:35.198 } 00:48:35.198 } 00:48:35.198 } 00:48:35.198 ]' 00:48:35.198 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:48:35.198 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:48:35.198 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:48:35.198 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:48:35.198 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:48:35.198 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:48:35.198 13:43:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:48:35.198 13:43:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:48:35.457 13:43:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:48:35.457 13:43:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size c9766fc1-4726-4c61-9f61-58ede6eb1998 00:48:35.457 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c9766fc1-4726-4c61-9f61-58ede6eb1998 00:48:35.457 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:48:35.457 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:48:35.457 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:48:35.457 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c9766fc1-4726-4c61-9f61-58ede6eb1998 00:48:35.716 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:48:35.716 { 00:48:35.716 "name": "c9766fc1-4726-4c61-9f61-58ede6eb1998", 00:48:35.716 "aliases": [ 00:48:35.716 "lvs/nvme0n1p0" 00:48:35.716 ], 00:48:35.716 "product_name": "Logical Volume", 00:48:35.716 "block_size": 4096, 00:48:35.716 "num_blocks": 26476544, 00:48:35.716 "uuid": "c9766fc1-4726-4c61-9f61-58ede6eb1998", 00:48:35.716 "assigned_rate_limits": { 00:48:35.716 "rw_ios_per_sec": 0, 00:48:35.717 "rw_mbytes_per_sec": 0, 00:48:35.717 "r_mbytes_per_sec": 0, 00:48:35.717 "w_mbytes_per_sec": 0 00:48:35.717 }, 00:48:35.717 "claimed": false, 00:48:35.717 "zoned": false, 00:48:35.717 "supported_io_types": { 00:48:35.717 "read": true, 00:48:35.717 "write": true, 00:48:35.717 "unmap": true, 00:48:35.717 "flush": false, 00:48:35.717 "reset": true, 00:48:35.717 "nvme_admin": false, 00:48:35.717 "nvme_io": false, 00:48:35.717 "nvme_io_md": false, 00:48:35.717 "write_zeroes": true, 00:48:35.717 "zcopy": false, 00:48:35.717 "get_zone_info": false, 00:48:35.717 "zone_management": false, 00:48:35.717 "zone_append": false, 00:48:35.717 "compare": false, 00:48:35.717 "compare_and_write": false, 00:48:35.717 "abort": false, 00:48:35.717 "seek_hole": true, 00:48:35.717 "seek_data": true, 00:48:35.717 "copy": false, 00:48:35.717 "nvme_iov_md": false 00:48:35.717 }, 00:48:35.717 "driver_specific": { 00:48:35.717 "lvol": { 00:48:35.717 "lvol_store_uuid": "3bd5adc2-1a71-456f-a1a7-e9515671ce6a", 00:48:35.717 "base_bdev": "nvme0n1", 00:48:35.717 "thin_provision": true, 00:48:35.717 "num_allocated_clusters": 0, 00:48:35.717 "snapshot": false, 00:48:35.717 "clone": false, 00:48:35.717 "esnap_clone": false 00:48:35.717 } 00:48:35.717 } 00:48:35.717 } 00:48:35.717 ]' 00:48:35.717 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:48:35.717 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:48:35.717 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:48:35.717 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:48:35.717 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:48:35.717 13:43:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:48:35.717 13:43:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:48:35.717 13:43:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c9766fc1-4726-4c61-9f61-58ede6eb1998 
--l2p_dram_limit 10' 00:48:35.717 13:43:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:48:35.717 13:43:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:48:35.717 13:43:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:48:35.717 13:43:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c9766fc1-4726-4c61-9f61-58ede6eb1998 --l2p_dram_limit 10 -c nvc0n1p0 00:48:35.977 [2024-12-06 13:43:28.891505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.977 [2024-12-06 13:43:28.891597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:48:35.977 [2024-12-06 13:43:28.891621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:48:35.977 [2024-12-06 13:43:28.891632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.977 [2024-12-06 13:43:28.891714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.977 [2024-12-06 13:43:28.891727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:35.977 [2024-12-06 13:43:28.891742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:48:35.977 [2024-12-06 13:43:28.891753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.977 [2024-12-06 13:43:28.891788] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:48:35.977 [2024-12-06 13:43:28.892944] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:48:35.977 [2024-12-06 13:43:28.892977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.977 [2024-12-06 13:43:28.892989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:35.977 [2024-12-06 13:43:28.893005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:48:35.977 [2024-12-06 13:43:28.893017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.977 [2024-12-06 13:43:28.893064] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 557a187b-33b3-4c37-87be-aa26a920e4a6 00:48:35.977 [2024-12-06 13:43:28.895787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.977 [2024-12-06 13:43:28.895831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:48:35.977 [2024-12-06 13:43:28.895845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:48:35.977 [2024-12-06 13:43:28.895859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.977 [2024-12-06 13:43:28.911139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.977 [2024-12-06 13:43:28.911371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:35.977 [2024-12-06 13:43:28.911428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.168 ms 00:48:35.977 [2024-12-06 13:43:28.911444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.977 [2024-12-06 13:43:28.911592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.977 [2024-12-06 13:43:28.911610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:35.977 [2024-12-06 13:43:28.911623] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:48:35.977 [2024-12-06 13:43:28.911644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.977 [2024-12-06 13:43:28.911725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.977 [2024-12-06 13:43:28.911745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:48:35.977 [2024-12-06 13:43:28.911760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:48:35.977 [2024-12-06 13:43:28.911775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.977 [2024-12-06 13:43:28.911805] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:48:35.977 [2024-12-06 13:43:28.917913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.977 [2024-12-06 13:43:28.918050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:35.977 [2024-12-06 13:43:28.918091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.112 ms 00:48:35.977 [2024-12-06 13:43:28.918103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.977 [2024-12-06 13:43:28.918151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.977 [2024-12-06 13:43:28.918163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:48:35.977 [2024-12-06 13:43:28.918178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:48:35.977 [2024-12-06 13:43:28.918188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.977 [2024-12-06 13:43:28.918229] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:48:35.977 [2024-12-06 13:43:28.918382] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:48:35.977 [2024-12-06 13:43:28.918406] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:48:35.977 [2024-12-06 13:43:28.918436] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:48:35.977 [2024-12-06 13:43:28.918456] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:48:35.977 [2024-12-06 13:43:28.918470] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:48:35.977 [2024-12-06 13:43:28.918486] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:48:35.977 [2024-12-06 13:43:28.918498] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:48:35.977 [2024-12-06 13:43:28.918518] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:48:35.977 [2024-12-06 13:43:28.918528] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:48:35.977 [2024-12-06 13:43:28.918543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.977 [2024-12-06 13:43:28.918566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:48:35.977 [2024-12-06 13:43:28.918582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:48:35.977 [2024-12-06 13:43:28.918593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.977 [2024-12-06 13:43:28.918679] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.977 [2024-12-06 13:43:28.918691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:48:35.977 [2024-12-06 13:43:28.918705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:48:35.977 [2024-12-06 13:43:28.918716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.977 [2024-12-06 13:43:28.918818] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:48:35.977 [2024-12-06 13:43:28.918831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:48:35.977 [2024-12-06 13:43:28.918846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:35.977 [2024-12-06 13:43:28.918857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:35.977 [2024-12-06 13:43:28.918872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:48:35.977 [2024-12-06 13:43:28.918881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:48:35.977 [2024-12-06 13:43:28.918895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:48:35.977 [2024-12-06 13:43:28.918905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:48:35.977 [2024-12-06 13:43:28.918918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:48:35.977 [2024-12-06 13:43:28.918927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:35.977 [2024-12-06 13:43:28.918942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:48:35.977 [2024-12-06 13:43:28.918951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:48:35.977 [2024-12-06 13:43:28.918965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:35.977 [2024-12-06 13:43:28.918975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:48:35.977 [2024-12-06 13:43:28.918990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:48:35.977 [2024-12-06 13:43:28.919000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:35.977 [2024-12-06 13:43:28.919016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:48:35.977 [2024-12-06 13:43:28.919026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:48:35.977 [2024-12-06 13:43:28.919039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:35.977 [2024-12-06 13:43:28.919049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:48:35.977 [2024-12-06 13:43:28.919062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:48:35.977 [2024-12-06 13:43:28.919071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:35.977 [2024-12-06 13:43:28.919084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:48:35.977 [2024-12-06 13:43:28.919094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:48:35.977 [2024-12-06 13:43:28.919106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:35.977 [2024-12-06 13:43:28.919116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:48:35.977 [2024-12-06 13:43:28.919135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:48:35.977 [2024-12-06 13:43:28.919144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:35.977 [2024-12-06 13:43:28.919157] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:48:35.977 [2024-12-06 13:43:28.919167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:48:35.977 [2024-12-06 13:43:28.919180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:35.977 [2024-12-06 13:43:28.919189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:48:35.978 [2024-12-06 13:43:28.919206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:48:35.978 [2024-12-06 13:43:28.919216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:35.978 [2024-12-06 13:43:28.919229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:48:35.978 [2024-12-06 13:43:28.919239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:48:35.978 [2024-12-06 13:43:28.919253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:35.978 [2024-12-06 13:43:28.919263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:48:35.978 [2024-12-06 13:43:28.919277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:48:35.978 [2024-12-06 13:43:28.919286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:35.978 [2024-12-06 13:43:28.919299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:48:35.978 [2024-12-06 13:43:28.919308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:48:35.978 [2024-12-06 13:43:28.919320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:35.978 [2024-12-06 13:43:28.919329] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:48:35.978 [2024-12-06 13:43:28.919343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:48:35.978 [2024-12-06 13:43:28.919353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:35.978 [2024-12-06 13:43:28.919368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:35.978 [2024-12-06 13:43:28.919380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:48:35.978 [2024-12-06 13:43:28.919405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:48:35.978 [2024-12-06 13:43:28.919416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:48:35.978 [2024-12-06 13:43:28.919430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:48:35.978 [2024-12-06 13:43:28.919439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:48:35.978 [2024-12-06 13:43:28.919452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:48:35.978 [2024-12-06 13:43:28.919464] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:48:35.978 [2024-12-06 13:43:28.919484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:35.978 [2024-12-06 13:43:28.919497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:48:35.978 [2024-12-06 13:43:28.919511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:48:35.978 [2024-12-06 13:43:28.919530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:48:35.978 [2024-12-06 13:43:28.919544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:48:35.978 [2024-12-06 13:43:28.919555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:48:35.978 [2024-12-06 13:43:28.919569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:48:35.978 [2024-12-06 13:43:28.919580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:48:35.978 [2024-12-06 13:43:28.919595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:48:35.978 [2024-12-06 13:43:28.919606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:48:35.978 [2024-12-06 13:43:28.919623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:48:35.978 [2024-12-06 13:43:28.919634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:48:35.978 [2024-12-06 13:43:28.919648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:48:35.978 [2024-12-06 13:43:28.919660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:48:35.978 [2024-12-06 13:43:28.919674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:48:35.978 [2024-12-06 13:43:28.919684] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:48:35.978 [2024-12-06 13:43:28.919699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:35.978 [2024-12-06 13:43:28.919710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:48:35.978 [2024-12-06 13:43:28.919724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:48:35.978 [2024-12-06 13:43:28.919735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:48:35.978 [2024-12-06 13:43:28.919751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:48:35.978 [2024-12-06 13:43:28.919763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.978 [2024-12-06 13:43:28.919777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:48:35.978 [2024-12-06 13:43:28.919788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:48:35.978 [2024-12-06 13:43:28.919803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.978 [2024-12-06 13:43:28.919852] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:48:35.978 [2024-12-06 13:43:28.919873] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:48:39.269 [2024-12-06 13:43:32.002470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.269 [2024-12-06 13:43:32.002558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:48:39.269 [2024-12-06 13:43:32.002577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3082.597 ms 00:48:39.269 [2024-12-06 13:43:32.002593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.269 [2024-12-06 13:43:32.051807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.269 [2024-12-06 13:43:32.052121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:39.269 [2024-12-06 13:43:32.052149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.857 ms 00:48:39.269 [2024-12-06 13:43:32.052165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.269 [2024-12-06 13:43:32.052342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.269 [2024-12-06 13:43:32.052360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:48:39.269 [2024-12-06 13:43:32.052373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:48:39.269 [2024-12-06 13:43:32.052413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.269 [2024-12-06 13:43:32.106741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.269 [2024-12-06 13:43:32.106797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:39.269 [2024-12-06 13:43:32.106813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.249 ms 00:48:39.269 [2024-12-06 13:43:32.106827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.270 [2024-12-06 13:43:32.106875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.270 [2024-12-06 13:43:32.106896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:39.270 [2024-12-06 13:43:32.106907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:48:39.270 [2024-12-06 13:43:32.106932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.270 [2024-12-06 13:43:32.107891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.270 [2024-12-06 13:43:32.107917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:39.270 [2024-12-06 13:43:32.107930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:48:39.270 [2024-12-06 13:43:32.107944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.270 [2024-12-06 13:43:32.108061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.270 [2024-12-06 13:43:32.108077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:39.270 [2024-12-06 13:43:32.108093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:48:39.270 [2024-12-06 13:43:32.108110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.270 [2024-12-06 13:43:32.133465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.270 [2024-12-06 13:43:32.133668] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:39.270 [2024-12-06 13:43:32.133693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.332 ms 00:48:39.270 [2024-12-06 13:43:32.133724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.270 [2024-12-06 13:43:32.160303] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:48:39.270 [2024-12-06 13:43:32.165937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.270 [2024-12-06 13:43:32.166094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:48:39.270 [2024-12-06 13:43:32.166125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.092 ms 00:48:39.270 [2024-12-06 13:43:32.166138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.270 [2024-12-06 13:43:32.250671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.270 [2024-12-06 13:43:32.250757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:48:39.270 [2024-12-06 13:43:32.250782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.483 ms 00:48:39.270 [2024-12-06 13:43:32.250794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.270 [2024-12-06 13:43:32.251025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.270 [2024-12-06 13:43:32.251045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:48:39.270 [2024-12-06 13:43:32.251065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:48:39.270 [2024-12-06 13:43:32.251076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.270 [2024-12-06 13:43:32.287554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.270 [2024-12-06 13:43:32.287726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:48:39.270 [2024-12-06 13:43:32.287756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.400 ms 00:48:39.270 [2024-12-06 13:43:32.287768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.270 [2024-12-06 13:43:32.323260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.270 [2024-12-06 13:43:32.323440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:48:39.270 [2024-12-06 13:43:32.323469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.440 ms 00:48:39.270 [2024-12-06 13:43:32.323480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.270 [2024-12-06 13:43:32.324274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.270 [2024-12-06 13:43:32.324295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:48:39.270 [2024-12-06 13:43:32.324312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:48:39.270 [2024-12-06 13:43:32.324327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.529 [2024-12-06 13:43:32.424392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.529 [2024-12-06 13:43:32.424609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:48:39.529 [2024-12-06 13:43:32.424645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.998 ms 00:48:39.529 [2024-12-06 13:43:32.424659] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.529 [2024-12-06 13:43:32.462980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.529 [2024-12-06 13:43:32.463146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:48:39.529 [2024-12-06 13:43:32.463193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.228 ms 00:48:39.529 [2024-12-06 13:43:32.463205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.529 [2024-12-06 13:43:32.498760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.529 [2024-12-06 13:43:32.498797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:48:39.529 [2024-12-06 13:43:32.498816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.489 ms 00:48:39.529 [2024-12-06 13:43:32.498827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.529 [2024-12-06 13:43:32.534630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.529 [2024-12-06 13:43:32.534665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:48:39.529 [2024-12-06 13:43:32.534683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.754 ms 00:48:39.529 [2024-12-06 13:43:32.534694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.529 [2024-12-06 13:43:32.534742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.529 [2024-12-06 13:43:32.534754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:48:39.529 [2024-12-06 13:43:32.534772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:48:39.529 [2024-12-06 13:43:32.534783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.529 [2024-12-06 13:43:32.534912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:39.529 [2024-12-06 13:43:32.534930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:48:39.529 [2024-12-06 13:43:32.534945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:48:39.529 [2024-12-06 13:43:32.534956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:39.529 [2024-12-06 13:43:32.536607] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3644.489 ms, result 0 00:48:39.529 { 00:48:39.529 "name": "ftl0", 00:48:39.529 "uuid": "557a187b-33b3-4c37-87be-aa26a920e4a6" 00:48:39.529 } 00:48:39.529 13:43:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:48:39.529 13:43:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:48:39.788 13:43:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:48:39.788 13:43:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:48:39.788 13:43:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:48:40.047 /dev/nbd0 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:48:40.048 1+0 records in 00:48:40.048 1+0 records out 00:48:40.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361741 s, 11.3 MB/s 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:48:40.048 13:43:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:48:40.307 [2024-12-06 13:43:33.250790] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:48:40.307 [2024-12-06 13:43:33.251829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81964 ] 00:48:40.566 [2024-12-06 13:43:33.450256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:40.566 [2024-12-06 13:43:33.626656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:41.943  [2024-12-06T13:43:36.434Z] Copying: 194/1024 [MB] (194 MBps) [2024-12-06T13:43:37.372Z] Copying: 389/1024 [MB] (194 MBps) [2024-12-06T13:43:38.309Z] Copying: 584/1024 [MB] (195 MBps) [2024-12-06T13:43:39.248Z] Copying: 776/1024 [MB] (191 MBps) [2024-12-06T13:43:39.507Z] Copying: 965/1024 [MB] (189 MBps) [2024-12-06T13:43:40.884Z] Copying: 1024/1024 [MB] (average 192 MBps) 00:48:47.784 00:48:47.784 13:43:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:48:49.688 13:43:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:48:49.688 [2024-12-06 13:43:42.585170] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:48:49.688 [2024-12-06 13:43:42.585362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82058 ] 00:48:49.688 [2024-12-06 13:43:42.774511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:49.947 [2024-12-06 13:43:42.951362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:51.349  [2024-12-06T13:43:45.387Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-06T13:43:46.762Z] Copying: 35/1024 [MB] (18 MBps) [2024-12-06T13:43:47.697Z] Copying: 53/1024 [MB] (18 MBps) [2024-12-06T13:43:48.632Z] Copying: 71/1024 [MB] (17 MBps) [2024-12-06T13:43:49.569Z] Copying: 88/1024 [MB] (17 MBps) [2024-12-06T13:43:50.506Z] Copying: 106/1024 [MB] (17 MBps) [2024-12-06T13:43:51.444Z] Copying: 124/1024 [MB] (18 MBps) [2024-12-06T13:43:52.382Z] Copying: 141/1024 [MB] (17 MBps) [2024-12-06T13:43:53.759Z] Copying: 157/1024 [MB] (16 MBps) [2024-12-06T13:43:54.696Z] Copying: 174/1024 [MB] (16 MBps) [2024-12-06T13:43:55.671Z] Copying: 191/1024 [MB] (16 MBps) [2024-12-06T13:43:56.606Z] Copying: 207/1024 [MB] (16 MBps) [2024-12-06T13:43:57.544Z] Copying: 224/1024 [MB] (16 MBps) [2024-12-06T13:43:58.481Z] Copying: 240/1024 [MB] (16 MBps) [2024-12-06T13:43:59.418Z] Copying: 257/1024 [MB] (16 MBps) [2024-12-06T13:44:00.355Z] Copying: 274/1024 [MB] (16 MBps) [2024-12-06T13:44:01.733Z] Copying: 290/1024 [MB] (16 MBps) [2024-12-06T13:44:02.670Z] Copying: 307/1024 [MB] (16 MBps) [2024-12-06T13:44:03.608Z] Copying: 323/1024 [MB] (16 MBps) [2024-12-06T13:44:04.544Z] Copying: 340/1024 [MB] (16 MBps) [2024-12-06T13:44:05.480Z] Copying: 356/1024 [MB] (16 MBps) [2024-12-06T13:44:06.412Z] Copying: 373/1024 [MB] (16 MBps) [2024-12-06T13:44:07.789Z] Copying: 390/1024 [MB] (17 MBps) [2024-12-06T13:44:08.375Z] Copying: 407/1024 [MB] (17 MBps) [2024-12-06T13:44:09.774Z] Copying: 424/1024 [MB] (16 MBps) [2024-12-06T13:44:10.711Z] Copying: 441/1024 [MB] (16 MBps) [2024-12-06T13:44:11.650Z] Copying: 458/1024 [MB] (16 MBps) [2024-12-06T13:44:12.586Z] Copying: 474/1024 [MB] (16 MBps) [2024-12-06T13:44:13.523Z] Copying: 491/1024 [MB] (16 MBps) [2024-12-06T13:44:14.462Z] Copying: 508/1024 [MB] (16 MBps) [2024-12-06T13:44:15.398Z] Copying: 525/1024 [MB] (16 MBps) [2024-12-06T13:44:16.358Z] Copying: 541/1024 [MB] (16 MBps) [2024-12-06T13:44:17.734Z] Copying: 558/1024 [MB] (16 MBps) [2024-12-06T13:44:18.671Z] Copying: 575/1024 [MB] (16 MBps) [2024-12-06T13:44:19.609Z] Copying: 592/1024 [MB] (16 MBps) [2024-12-06T13:44:20.546Z] Copying: 608/1024 [MB] (16 MBps) [2024-12-06T13:44:21.483Z] Copying: 625/1024 [MB] (16 MBps) [2024-12-06T13:44:22.420Z] Copying: 642/1024 [MB] (16 MBps) [2024-12-06T13:44:23.382Z] Copying: 659/1024 [MB] (16 MBps) [2024-12-06T13:44:24.761Z] Copying: 676/1024 [MB] (16 MBps) [2024-12-06T13:44:25.694Z] Copying: 693/1024 [MB] (16 MBps) [2024-12-06T13:44:26.624Z] Copying: 709/1024 [MB] (16 MBps) [2024-12-06T13:44:27.558Z] Copying: 726/1024 [MB] (16 MBps) [2024-12-06T13:44:28.514Z] Copying: 743/1024 [MB] (17 MBps) [2024-12-06T13:44:29.448Z] Copying: 760/1024 [MB] (16 MBps) [2024-12-06T13:44:30.381Z] Copying: 777/1024 [MB] (16 MBps) [2024-12-06T13:44:31.757Z] Copying: 794/1024 [MB] (16 MBps) [2024-12-06T13:44:32.693Z] Copying: 810/1024 [MB] (16 MBps) [2024-12-06T13:44:33.630Z] Copying: 827/1024 [MB] (17 MBps) [2024-12-06T13:44:34.566Z] Copying: 844/1024 [MB] (17 MBps) 
[2024-12-06T13:44:35.500Z] Copying: 862/1024 [MB] (17 MBps) [2024-12-06T13:44:36.432Z] Copying: 880/1024 [MB] (17 MBps) [2024-12-06T13:44:37.381Z] Copying: 897/1024 [MB] (17 MBps) [2024-12-06T13:44:38.759Z] Copying: 914/1024 [MB] (16 MBps) [2024-12-06T13:44:39.696Z] Copying: 931/1024 [MB] (16 MBps) [2024-12-06T13:44:40.634Z] Copying: 948/1024 [MB] (16 MBps) [2024-12-06T13:44:41.571Z] Copying: 964/1024 [MB] (16 MBps) [2024-12-06T13:44:42.509Z] Copying: 981/1024 [MB] (16 MBps) [2024-12-06T13:44:43.445Z] Copying: 997/1024 [MB] (16 MBps) [2024-12-06T13:44:44.012Z] Copying: 1014/1024 [MB] (16 MBps) [2024-12-06T13:44:45.388Z] Copying: 1024/1024 [MB] (average 16 MBps) 00:49:52.288 00:49:52.288 13:44:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:49:52.288 13:44:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:49:52.546 13:44:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:49:52.805 [2024-12-06 13:44:45.695846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:52.805 [2024-12-06 13:44:45.696160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:49:52.805 [2024-12-06 13:44:45.696190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:49:52.805 [2024-12-06 13:44:45.696206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:52.805 [2024-12-06 13:44:45.696256] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:49:52.805 [2024-12-06 13:44:45.701217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:52.805 [2024-12-06 13:44:45.701363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:49:52.805 [2024-12-06 13:44:45.701476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.932 ms 00:49:52.805 [2024-12-06 13:44:45.701515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:52.805 [2024-12-06 13:44:45.703712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:52.805 [2024-12-06 13:44:45.703856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:49:52.805 [2024-12-06 13:44:45.703946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.126 ms 00:49:52.805 [2024-12-06 13:44:45.703993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:52.805 [2024-12-06 13:44:45.720993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:52.805 [2024-12-06 13:44:45.721160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:49:52.805 [2024-12-06 13:44:45.721253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.940 ms 00:49:52.805 [2024-12-06 13:44:45.721292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:52.805 [2024-12-06 13:44:45.726465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:52.805 [2024-12-06 13:44:45.726610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:49:52.805 [2024-12-06 13:44:45.726761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.108 ms 00:49:52.805 [2024-12-06 13:44:45.726799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:52.805 [2024-12-06 13:44:45.764321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:49:52.805 [2024-12-06 13:44:45.764494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:49:52.805 [2024-12-06 13:44:45.764523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.411 ms 00:49:52.805 [2024-12-06 13:44:45.764535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:52.805 [2024-12-06 13:44:45.787600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:52.805 [2024-12-06 13:44:45.787755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:49:52.805 [2024-12-06 13:44:45.787788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.013 ms 00:49:52.805 [2024-12-06 13:44:45.787800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:52.805 [2024-12-06 13:44:45.787981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:52.805 [2024-12-06 13:44:45.787996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:49:52.805 [2024-12-06 13:44:45.788012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:49:52.805 [2024-12-06 13:44:45.788023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:52.805 [2024-12-06 13:44:45.825462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:52.805 [2024-12-06 13:44:45.825500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:49:52.805 [2024-12-06 13:44:45.825534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.414 ms 00:49:52.805 [2024-12-06 13:44:45.825545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:52.805 [2024-12-06 13:44:45.861594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:52.805 [2024-12-06 13:44:45.861651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:49:52.805 [2024-12-06 13:44:45.861670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.000 ms 00:49:52.805 [2024-12-06 13:44:45.861680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:52.805 [2024-12-06 13:44:45.897220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:52.805 [2024-12-06 13:44:45.897256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:49:52.805 [2024-12-06 13:44:45.897272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.489 ms 00:49:52.805 [2024-12-06 13:44:45.897298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.064 [2024-12-06 13:44:45.933169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:53.064 [2024-12-06 13:44:45.933205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:49:53.064 [2024-12-06 13:44:45.933221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.741 ms 00:49:53.064 [2024-12-06 13:44:45.933247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.064 [2024-12-06 13:44:45.933292] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:49:53.064 [2024-12-06 13:44:45.933311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:49:53.064 [2024-12-06 13:44:45.933329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:49:53.064 [2024-12-06 13:44:45.933341] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:49:53.064 [2024-12-06 13:44:45.933356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:49:53.064 [2024-12-06 13:44:45.933367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:49:53.064 [2024-12-06 13:44:45.933382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933698] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.933989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 
[2024-12-06 13:44:45.934040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:49:53.065 [2024-12-06 13:44:45.934378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:49:53.065 [2024-12-06 13:44:45.934608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:49:53.066 [2024-12-06 13:44:45.934623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:49:53.066 [2024-12-06 13:44:45.934634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:49:53.066 [2024-12-06 13:44:45.934648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:49:53.066 [2024-12-06 13:44:45.934659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:49:53.066 [2024-12-06 13:44:45.934675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:49:53.066 [2024-12-06 13:44:45.934694] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:49:53.066 [2024-12-06 13:44:45.934709] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 557a187b-33b3-4c37-87be-aa26a920e4a6 
00:49:53.066 [2024-12-06 13:44:45.934721] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:49:53.066 [2024-12-06 13:44:45.934738] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:49:53.066 [2024-12-06 13:44:45.934752] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:49:53.066 [2024-12-06 13:44:45.934766] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:49:53.066 [2024-12-06 13:44:45.934776] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:49:53.066 [2024-12-06 13:44:45.934791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:49:53.066 [2024-12-06 13:44:45.934801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:49:53.066 [2024-12-06 13:44:45.934814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:49:53.066 [2024-12-06 13:44:45.934823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:49:53.066 [2024-12-06 13:44:45.934836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:53.066 [2024-12-06 13:44:45.934847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:49:53.066 [2024-12-06 13:44:45.934861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:49:53.066 [2024-12-06 13:44:45.934871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.066 [2024-12-06 13:44:45.957024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:53.066 [2024-12-06 13:44:45.957062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:49:53.066 [2024-12-06 13:44:45.957079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.092 ms 00:49:53.066 [2024-12-06 13:44:45.957090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.066 [2024-12-06 13:44:45.957726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:53.066 [2024-12-06 13:44:45.957739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:49:53.066 [2024-12-06 13:44:45.957753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:49:53.066 [2024-12-06 13:44:45.957763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.066 [2024-12-06 13:44:46.033341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:53.066 [2024-12-06 13:44:46.033643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:53.066 [2024-12-06 13:44:46.033735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:53.066 [2024-12-06 13:44:46.033773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.066 [2024-12-06 13:44:46.033903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:53.066 [2024-12-06 13:44:46.033979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:53.066 [2024-12-06 13:44:46.034023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:53.066 [2024-12-06 13:44:46.034055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.066 [2024-12-06 13:44:46.034295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:53.066 [2024-12-06 13:44:46.034421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:53.066 [2024-12-06 13:44:46.034510] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:53.066 [2024-12-06 13:44:46.034549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.066 [2024-12-06 13:44:46.034610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:53.066 [2024-12-06 13:44:46.034729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:53.066 [2024-12-06 13:44:46.034792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:53.066 [2024-12-06 13:44:46.034824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.325 [2024-12-06 13:44:46.173406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:53.325 [2024-12-06 13:44:46.173726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:53.325 [2024-12-06 13:44:46.173916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:53.325 [2024-12-06 13:44:46.173954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.325 [2024-12-06 13:44:46.281547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:53.325 [2024-12-06 13:44:46.281836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:53.325 [2024-12-06 13:44:46.281991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:53.325 [2024-12-06 13:44:46.282032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.325 [2024-12-06 13:44:46.282212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:53.325 [2024-12-06 13:44:46.282293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:53.325 [2024-12-06 13:44:46.282374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:53.325 [2024-12-06 13:44:46.282433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.325 [2024-12-06 13:44:46.282575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:53.325 [2024-12-06 13:44:46.282775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:53.325 [2024-12-06 13:44:46.282825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:53.325 [2024-12-06 13:44:46.282859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.325 [2024-12-06 13:44:46.283057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:53.325 [2024-12-06 13:44:46.283107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:53.325 [2024-12-06 13:44:46.283158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:53.325 [2024-12-06 13:44:46.283288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.325 [2024-12-06 13:44:46.283381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:53.325 [2024-12-06 13:44:46.283443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:49:53.325 [2024-12-06 13:44:46.283482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:53.325 [2024-12-06 13:44:46.283618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.325 [2024-12-06 13:44:46.283679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:53.325 [2024-12-06 13:44:46.283693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:49:53.325 [2024-12-06 13:44:46.283709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:53.325 [2024-12-06 13:44:46.283725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.325 [2024-12-06 13:44:46.283789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:53.325 [2024-12-06 13:44:46.283802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:53.325 [2024-12-06 13:44:46.283818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:53.325 [2024-12-06 13:44:46.283829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:53.325 [2024-12-06 13:44:46.284011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 588.107 ms, result 0 00:49:53.325 true 00:49:53.325 13:44:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81815 00:49:53.325 13:44:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81815 00:49:53.325 13:44:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:49:53.584 [2024-12-06 13:44:46.449060] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:49:53.584 [2024-12-06 13:44:46.449251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82704 ] 00:49:53.584 [2024-12-06 13:44:46.639036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:53.842 [2024-12-06 13:44:46.785073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:55.220  [2024-12-06T13:44:49.255Z] Copying: 192/1024 [MB] (192 MBps) [2024-12-06T13:44:50.189Z] Copying: 391/1024 [MB] (198 MBps) [2024-12-06T13:44:51.572Z] Copying: 589/1024 [MB] (198 MBps) [2024-12-06T13:44:52.509Z] Copying: 786/1024 [MB] (196 MBps) [2024-12-06T13:44:52.509Z] Copying: 978/1024 [MB] (192 MBps) [2024-12-06T13:44:53.890Z] Copying: 1024/1024 [MB] (average 195 MBps) 00:50:00.790 00:50:00.790 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81815 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:50:00.790 13:44:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:50:00.790 [2024-12-06 13:44:53.832590] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:50:00.790 [2024-12-06 13:44:53.832813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82779 ] 00:50:01.058 [2024-12-06 13:44:54.023429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:01.362 [2024-12-06 13:44:54.177460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:01.621 [2024-12-06 13:44:54.625799] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:01.621 [2024-12-06 13:44:54.625892] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:01.621 [2024-12-06 13:44:54.693494] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:50:01.621 [2024-12-06 13:44:54.693824] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:50:01.621 [2024-12-06 13:44:54.694038] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:50:01.881 [2024-12-06 13:44:54.968171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:01.881 [2024-12-06 13:44:54.968248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:50:01.881 [2024-12-06 13:44:54.968283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:50:01.881 [2024-12-06 13:44:54.968298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:01.881 [2024-12-06 13:44:54.968359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:01.881 [2024-12-06 13:44:54.968373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:01.881 [2024-12-06 13:44:54.968384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:50:01.881 [2024-12-06 13:44:54.968395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:01.881 [2024-12-06 13:44:54.968436] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:50:01.881 [2024-12-06 13:44:54.969394] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:50:01.881 [2024-12-06 13:44:54.969432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:01.881 [2024-12-06 13:44:54.969444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:01.881 [2024-12-06 13:44:54.969455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:50:01.881 [2024-12-06 13:44:54.969466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:01.881 [2024-12-06 13:44:54.972126] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:50:02.141 [2024-12-06 13:44:54.992309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.141 [2024-12-06 13:44:54.992350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:50:02.141 [2024-12-06 13:44:54.992368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.184 ms 00:50:02.141 [2024-12-06 13:44:54.992380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.141 [2024-12-06 13:44:54.992482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.141 [2024-12-06 13:44:54.992497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:50:02.141 [2024-12-06 13:44:54.992510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:50:02.141 [2024-12-06 13:44:54.992521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.141 [2024-12-06 13:44:55.005643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.141 [2024-12-06 13:44:55.005676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:02.141 [2024-12-06 13:44:55.005691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.042 ms 00:50:02.141 [2024-12-06 13:44:55.005719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.141 [2024-12-06 13:44:55.005815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.141 [2024-12-06 13:44:55.005830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:02.141 [2024-12-06 13:44:55.005843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:50:02.141 [2024-12-06 13:44:55.005855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.141 [2024-12-06 13:44:55.005931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.141 [2024-12-06 13:44:55.005944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:50:02.141 [2024-12-06 13:44:55.005956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:50:02.141 [2024-12-06 13:44:55.005968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.141 [2024-12-06 13:44:55.005999] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:02.142 [2024-12-06 13:44:55.012078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.142 [2024-12-06 13:44:55.012238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:02.142 [2024-12-06 13:44:55.012260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.089 ms 00:50:02.142 [2024-12-06 13:44:55.012272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.142 [2024-12-06 13:44:55.012316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.142 [2024-12-06 13:44:55.012329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:50:02.142 [2024-12-06 13:44:55.012341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:50:02.142 [2024-12-06 13:44:55.012352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.142 [2024-12-06 13:44:55.012412] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:50:02.142 [2024-12-06 13:44:55.012442] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:50:02.142 [2024-12-06 13:44:55.012482] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:50:02.142 [2024-12-06 13:44:55.012501] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:50:02.142 [2024-12-06 13:44:55.012599] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:50:02.142 [2024-12-06 13:44:55.012614] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:50:02.142 
[2024-12-06 13:44:55.012629] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:50:02.142 [2024-12-06 13:44:55.012647] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:50:02.142 [2024-12-06 13:44:55.012661] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:50:02.142 [2024-12-06 13:44:55.012673] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:50:02.142 [2024-12-06 13:44:55.012684] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:50:02.142 [2024-12-06 13:44:55.012695] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:50:02.142 [2024-12-06 13:44:55.012706] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:50:02.142 [2024-12-06 13:44:55.012718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.142 [2024-12-06 13:44:55.012729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:50:02.142 [2024-12-06 13:44:55.012740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:50:02.142 [2024-12-06 13:44:55.012751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.142 [2024-12-06 13:44:55.012826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.142 [2024-12-06 13:44:55.012851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:50:02.142 [2024-12-06 13:44:55.012862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:50:02.142 [2024-12-06 13:44:55.012873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.142 [2024-12-06 13:44:55.012976] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:50:02.142 [2024-12-06 13:44:55.012991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:50:02.142 [2024-12-06 13:44:55.013003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:02.142 [2024-12-06 13:44:55.013014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:02.142 [2024-12-06 13:44:55.013026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:50:02.142 [2024-12-06 13:44:55.013036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:50:02.142 [2024-12-06 13:44:55.013046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:50:02.142 [2024-12-06 13:44:55.013056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:50:02.142 [2024-12-06 13:44:55.013067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:50:02.142 [2024-12-06 13:44:55.013090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:02.142 [2024-12-06 13:44:55.013100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:50:02.142 [2024-12-06 13:44:55.013113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:50:02.142 [2024-12-06 13:44:55.013124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:02.142 [2024-12-06 13:44:55.013134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:50:02.142 [2024-12-06 13:44:55.013144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:50:02.142 [2024-12-06 13:44:55.013155] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:02.142 [2024-12-06 13:44:55.013164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:50:02.142 [2024-12-06 13:44:55.013175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:50:02.142 [2024-12-06 13:44:55.013185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:02.142 [2024-12-06 13:44:55.013195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:50:02.142 [2024-12-06 13:44:55.013206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:50:02.142 [2024-12-06 13:44:55.013215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:02.142 [2024-12-06 13:44:55.013225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:50:02.142 [2024-12-06 13:44:55.013236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:50:02.142 [2024-12-06 13:44:55.013245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:02.142 [2024-12-06 13:44:55.013255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:50:02.142 [2024-12-06 13:44:55.013265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:50:02.142 [2024-12-06 13:44:55.013274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:02.142 [2024-12-06 13:44:55.013284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:50:02.142 [2024-12-06 13:44:55.013294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:50:02.142 [2024-12-06 13:44:55.013303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:02.142 [2024-12-06 13:44:55.013313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:50:02.142 [2024-12-06 13:44:55.013323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:50:02.142 [2024-12-06 13:44:55.013332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:02.142 [2024-12-06 13:44:55.013342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:50:02.142 [2024-12-06 13:44:55.013351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:50:02.142 [2024-12-06 13:44:55.013361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:02.142 [2024-12-06 13:44:55.013370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:50:02.142 [2024-12-06 13:44:55.013380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:50:02.142 [2024-12-06 13:44:55.013391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:02.142 [2024-12-06 13:44:55.013411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:50:02.142 [2024-12-06 13:44:55.013421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:50:02.142 [2024-12-06 13:44:55.013431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:02.142 [2024-12-06 13:44:55.013443] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:50:02.142 [2024-12-06 13:44:55.013454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:50:02.142 [2024-12-06 13:44:55.013469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:02.142 [2024-12-06 13:44:55.013480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:02.142 [2024-12-06 
13:44:55.013491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:50:02.142 [2024-12-06 13:44:55.013501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:50:02.142 [2024-12-06 13:44:55.013512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:50:02.142 [2024-12-06 13:44:55.013522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:50:02.142 [2024-12-06 13:44:55.013532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:50:02.142 [2024-12-06 13:44:55.013543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:50:02.142 [2024-12-06 13:44:55.013554] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:50:02.142 [2024-12-06 13:44:55.013567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:02.142 [2024-12-06 13:44:55.013579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:50:02.142 [2024-12-06 13:44:55.013590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:50:02.142 [2024-12-06 13:44:55.013601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:50:02.142 [2024-12-06 13:44:55.013612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:50:02.142 [2024-12-06 13:44:55.013623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:50:02.142 [2024-12-06 13:44:55.013633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:50:02.142 [2024-12-06 13:44:55.013645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:50:02.142 [2024-12-06 13:44:55.013656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:50:02.142 [2024-12-06 13:44:55.013677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:50:02.142 [2024-12-06 13:44:55.013688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:50:02.142 [2024-12-06 13:44:55.013699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:50:02.142 [2024-12-06 13:44:55.013709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:50:02.142 [2024-12-06 13:44:55.013720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:50:02.142 [2024-12-06 13:44:55.013731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:50:02.142 [2024-12-06 13:44:55.013742] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:50:02.142 [2024-12-06 13:44:55.013754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:02.142 [2024-12-06 13:44:55.013772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:50:02.142 [2024-12-06 13:44:55.013783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:50:02.142 [2024-12-06 13:44:55.013794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:50:02.142 [2024-12-06 13:44:55.013805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:50:02.142 [2024-12-06 13:44:55.013817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.142 [2024-12-06 13:44:55.013828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:50:02.142 [2024-12-06 13:44:55.013840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:50:02.143 [2024-12-06 13:44:55.013850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.143 [2024-12-06 13:44:55.066283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.143 [2024-12-06 13:44:55.066336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:02.143 [2024-12-06 13:44:55.066354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.371 ms 00:50:02.143 [2024-12-06 13:44:55.066366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.143 [2024-12-06 13:44:55.066492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.143 [2024-12-06 13:44:55.066507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:50:02.143 [2024-12-06 13:44:55.066519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:50:02.143 [2024-12-06 13:44:55.066530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.143 [2024-12-06 13:44:55.135506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.143 [2024-12-06 13:44:55.135558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:02.143 [2024-12-06 13:44:55.135579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.865 ms 00:50:02.143 [2024-12-06 13:44:55.135592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.143 [2024-12-06 13:44:55.135656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.143 [2024-12-06 13:44:55.135668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:02.143 [2024-12-06 13:44:55.135680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:50:02.143 [2024-12-06 13:44:55.135691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.143 [2024-12-06 13:44:55.136553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.143 [2024-12-06 13:44:55.136570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:02.143 [2024-12-06 13:44:55.136583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:50:02.143 [2024-12-06 13:44:55.136602] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.143 [2024-12-06 13:44:55.136748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.143 [2024-12-06 13:44:55.136763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:02.143 [2024-12-06 13:44:55.136774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:50:02.143 [2024-12-06 13:44:55.136785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.143 [2024-12-06 13:44:55.160471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.143 [2024-12-06 13:44:55.160689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:02.143 [2024-12-06 13:44:55.160715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.662 ms 00:50:02.143 [2024-12-06 13:44:55.160728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.143 [2024-12-06 13:44:55.181816] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:50:02.143 [2024-12-06 13:44:55.181855] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:50:02.143 [2024-12-06 13:44:55.181871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.143 [2024-12-06 13:44:55.181899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:50:02.143 [2024-12-06 13:44:55.181911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.991 ms 00:50:02.143 [2024-12-06 13:44:55.181922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.143 [2024-12-06 13:44:55.212197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.143 [2024-12-06 13:44:55.212240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:50:02.143 [2024-12-06 13:44:55.212255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.229 ms 00:50:02.143 [2024-12-06 13:44:55.212267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.143 [2024-12-06 13:44:55.230836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.143 [2024-12-06 13:44:55.230876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:50:02.143 [2024-12-06 13:44:55.230890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.515 ms 00:50:02.143 [2024-12-06 13:44:55.230901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.401 [2024-12-06 13:44:55.249671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.401 [2024-12-06 13:44:55.249712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:50:02.401 [2024-12-06 13:44:55.249726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.728 ms 00:50:02.401 [2024-12-06 13:44:55.249737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.401 [2024-12-06 13:44:55.250601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.401 [2024-12-06 13:44:55.250632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:50:02.401 [2024-12-06 13:44:55.250646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:50:02.401 [2024-12-06 13:44:55.250657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:50:02.401 [2024-12-06 13:44:55.351419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.401 [2024-12-06 13:44:55.351667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:50:02.401 [2024-12-06 13:44:55.351695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.734 ms 00:50:02.401 [2024-12-06 13:44:55.351708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.401 [2024-12-06 13:44:55.363859] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:50:02.401 [2024-12-06 13:44:55.369120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.401 [2024-12-06 13:44:55.369154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:50:02.401 [2024-12-06 13:44:55.369171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.326 ms 00:50:02.401 [2024-12-06 13:44:55.369188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.401 [2024-12-06 13:44:55.369318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.401 [2024-12-06 13:44:55.369333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:50:02.401 [2024-12-06 13:44:55.369346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:50:02.401 [2024-12-06 13:44:55.369357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.401 [2024-12-06 13:44:55.369468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.401 [2024-12-06 13:44:55.369483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:50:02.401 [2024-12-06 13:44:55.369494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:50:02.401 [2024-12-06 13:44:55.369506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.401 [2024-12-06 13:44:55.369541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.401 [2024-12-06 13:44:55.369554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:50:02.401 [2024-12-06 13:44:55.369565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:50:02.401 [2024-12-06 13:44:55.369575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.401 [2024-12-06 13:44:55.369616] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:50:02.401 [2024-12-06 13:44:55.369630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.401 [2024-12-06 13:44:55.369642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:50:02.401 [2024-12-06 13:44:55.369653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:50:02.401 [2024-12-06 13:44:55.369669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.401 [2024-12-06 13:44:55.408643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.401 [2024-12-06 13:44:55.408689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:50:02.401 [2024-12-06 13:44:55.408707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.950 ms 00:50:02.401 [2024-12-06 13:44:55.408719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.401 [2024-12-06 13:44:55.408802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:02.401 [2024-12-06 
13:44:55.408815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:50:02.401 [2024-12-06 13:44:55.408828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:50:02.401 [2024-12-06 13:44:55.408839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:02.401 [2024-12-06 13:44:55.410435] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 441.687 ms, result 0 00:50:03.336  [2024-12-06T13:44:57.813Z] Copying: 28/1024 [MB] (28 MBps) [2024-12-06T13:45:32.003Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-12-06 13:45:31.819139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.903 [2024-12-06 13:45:31.819316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:50:38.903 [2024-12-06 13:45:31.819343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:50:38.903 [2024-12-06 13:45:31.819365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.903 [2024-12-06 13:45:31.819413] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:50:38.903 [2024-12-06 13:45:31.824374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.903 [2024-12-06 13:45:31.824410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO
device 00:50:38.903 [2024-12-06 13:45:31.824424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.939 ms 00:50:38.903 [2024-12-06 13:45:31.824436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.903 [2024-12-06 13:45:31.827384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.903 [2024-12-06 13:45:31.827438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:50:38.903 [2024-12-06 13:45:31.827453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.923 ms 00:50:38.903 [2024-12-06 13:45:31.827465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.903 [2024-12-06 13:45:31.842924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.903 [2024-12-06 13:45:31.843084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:50:38.903 [2024-12-06 13:45:31.843107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.423 ms 00:50:38.903 [2024-12-06 13:45:31.843120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.903 [2024-12-06 13:45:31.848288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.903 [2024-12-06 13:45:31.848322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:50:38.903 [2024-12-06 13:45:31.848335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.133 ms 00:50:38.903 [2024-12-06 13:45:31.848346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.903 [2024-12-06 13:45:31.885511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.903 [2024-12-06 13:45:31.885550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:50:38.903 [2024-12-06 13:45:31.885564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.082 ms 00:50:38.903 [2024-12-06 13:45:31.885574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.903 [2024-12-06 13:45:31.909224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.903 [2024-12-06 13:45:31.909267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:50:38.903 [2024-12-06 13:45:31.909281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.611 ms 00:50:38.903 [2024-12-06 13:45:31.909292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.903 [2024-12-06 13:45:31.910343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.903 [2024-12-06 13:45:31.910380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:50:38.903 [2024-12-06 13:45:31.910394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:50:38.903 [2024-12-06 13:45:31.910419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.903 [2024-12-06 13:45:31.947867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.903 [2024-12-06 13:45:31.947903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:50:38.903 [2024-12-06 13:45:31.947916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.429 ms 00:50:38.903 [2024-12-06 13:45:31.947957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.903 [2024-12-06 13:45:31.984135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.903 [2024-12-06 13:45:31.984172] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:50:38.903 [2024-12-06 13:45:31.984185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.139 ms 00:50:38.903 [2024-12-06 13:45:31.984211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.163 [2024-12-06 13:45:32.019291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.163 [2024-12-06 13:45:32.019325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:50:39.163 [2024-12-06 13:45:32.019338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.041 ms 00:50:39.163 [2024-12-06 13:45:32.019348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.163 [2024-12-06 13:45:32.054766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.163 [2024-12-06 13:45:32.054801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:50:39.163 [2024-12-06 13:45:32.054813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.311 ms 00:50:39.163 [2024-12-06 13:45:32.054823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.163 [2024-12-06 13:45:32.054859] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:50:39.163 [2024-12-06 13:45:32.054876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 768 / 261120 wr_cnt: 1 state: open 00:50:39.163 [2024-12-06 13:45:32.054896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.054907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.054918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.054929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.054939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.054949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.054959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.054969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.054979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.054989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.054998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.055008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.055019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.055029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.055039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
16: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.055050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.055060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:50:39.163 [2024-12-06 13:45:32.055070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055306] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055645] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 
13:45:32.055939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.055995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.056006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.056018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.056030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.056041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:50:39.164 [2024-12-06 13:45:32.056060] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:50:39.164 [2024-12-06 13:45:32.056071] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 557a187b-33b3-4c37-87be-aa26a920e4a6 00:50:39.164 [2024-12-06 13:45:32.056095] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 768 00:50:39.164 [2024-12-06 13:45:32.056106] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1728 00:50:39.164 [2024-12-06 13:45:32.056116] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 768 00:50:39.164 [2024-12-06 13:45:32.056128] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.2500 00:50:39.164 [2024-12-06 13:45:32.056138] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:50:39.164 [2024-12-06 13:45:32.056155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:50:39.164 [2024-12-06 13:45:32.056166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:50:39.165 [2024-12-06 13:45:32.056176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:50:39.165 [2024-12-06 13:45:32.056185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:50:39.165 [2024-12-06 13:45:32.056195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.165 [2024-12-06 13:45:32.056206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:50:39.165 [2024-12-06 13:45:32.056218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:50:39.165 [2024-12-06 13:45:32.056228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.165 [2024-12-06 13:45:32.077181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.165 [2024-12-06 13:45:32.077213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:50:39.165 [2024-12-06 13:45:32.077225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.918 ms 00:50:39.165 [2024-12-06 13:45:32.077242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:50:39.165 [2024-12-06 13:45:32.077904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.165 [2024-12-06 13:45:32.077926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:50:39.165 [2024-12-06 13:45:32.077938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:50:39.165 [2024-12-06 13:45:32.077949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.165 [2024-12-06 13:45:32.133925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.165 [2024-12-06 13:45:32.133969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:39.165 [2024-12-06 13:45:32.133984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.165 [2024-12-06 13:45:32.133996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.165 [2024-12-06 13:45:32.134063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.165 [2024-12-06 13:45:32.134075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:39.165 [2024-12-06 13:45:32.134089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.165 [2024-12-06 13:45:32.134099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.165 [2024-12-06 13:45:32.134198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.165 [2024-12-06 13:45:32.134213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:39.165 [2024-12-06 13:45:32.134229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.165 [2024-12-06 13:45:32.134241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.165 [2024-12-06 13:45:32.134261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.165 [2024-12-06 13:45:32.134272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:39.165 [2024-12-06 13:45:32.134283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.165 [2024-12-06 13:45:32.134294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.424 [2024-12-06 13:45:32.272214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.424 [2024-12-06 13:45:32.272295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:39.424 [2024-12-06 13:45:32.272336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.424 [2024-12-06 13:45:32.272349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.424 [2024-12-06 13:45:32.379622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.424 [2024-12-06 13:45:32.379685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:39.424 [2024-12-06 13:45:32.379702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.424 [2024-12-06 13:45:32.379714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.424 [2024-12-06 13:45:32.379835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.424 [2024-12-06 13:45:32.379849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:39.424 [2024-12-06 13:45:32.379861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:50:39.424 [2024-12-06 13:45:32.379874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.424 [2024-12-06 13:45:32.379937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.424 [2024-12-06 13:45:32.379951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:39.424 [2024-12-06 13:45:32.379962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.424 [2024-12-06 13:45:32.379974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.424 [2024-12-06 13:45:32.380116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.424 [2024-12-06 13:45:32.380132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:39.424 [2024-12-06 13:45:32.380144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.424 [2024-12-06 13:45:32.380157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.424 [2024-12-06 13:45:32.380203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.424 [2024-12-06 13:45:32.380217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:50:39.424 [2024-12-06 13:45:32.380228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.424 [2024-12-06 13:45:32.380239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.424 [2024-12-06 13:45:32.380289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.424 [2024-12-06 13:45:32.380302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:39.424 [2024-12-06 13:45:32.380313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.424 [2024-12-06 13:45:32.380324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.424 [2024-12-06 13:45:32.380386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.424 [2024-12-06 13:45:32.380412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:39.424 [2024-12-06 13:45:32.380424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.424 [2024-12-06 13:45:32.380435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.424 [2024-12-06 13:45:32.380590] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 561.401 ms, result 0 00:50:41.332 00:50:41.332 00:50:41.332 13:45:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:50:43.258 13:45:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:50:43.258 [2024-12-06 13:45:36.152864] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
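One detail worth pulling out of the ftl_dev_dump_stats block in the shutdown above: the logged WAF is simply the ratio of media writes to host writes. A quick check with the exact figures from this log (plain arithmetic, nothing SPDK-specific):

```python
total_writes = 1728  # "total writes" from ftl_dev_dump_stats above
user_writes = 768    # "user writes" (host-issued) from the same dump
waf = total_writes / user_writes
print(f"WAF: {waf:.4f}")  # -> WAF: 2.2500, matching the logged value
```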
00:50:43.258 [2024-12-06 13:45:36.153276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83235 ] 00:50:43.258 [2024-12-06 13:45:36.338494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:43.517 [2024-12-06 13:45:36.526946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:44.086 [2024-12-06 13:45:36.953791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:44.086 [2024-12-06 13:45:36.953877] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:44.086 [2024-12-06 13:45:37.122102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.086 [2024-12-06 13:45:37.122330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:50:44.086 [2024-12-06 13:45:37.122358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:50:44.086 [2024-12-06 13:45:37.122370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.086 [2024-12-06 13:45:37.122453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.086 [2024-12-06 13:45:37.122471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:44.086 [2024-12-06 13:45:37.122484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:50:44.086 [2024-12-06 13:45:37.122495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.086 [2024-12-06 13:45:37.122519] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:50:44.086 [2024-12-06 13:45:37.123557] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:50:44.086 [2024-12-06 13:45:37.123581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.086 [2024-12-06 13:45:37.123593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:44.086 [2024-12-06 13:45:37.123606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:50:44.086 [2024-12-06 13:45:37.123617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.086 [2024-12-06 13:45:37.126175] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:50:44.086 [2024-12-06 13:45:37.147302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.086 [2024-12-06 13:45:37.147344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:50:44.086 [2024-12-06 13:45:37.147360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.127 ms 00:50:44.086 [2024-12-06 13:45:37.147372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.086 [2024-12-06 13:45:37.147467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.086 [2024-12-06 13:45:37.147490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:50:44.086 [2024-12-06 13:45:37.147503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:50:44.086 [2024-12-06 13:45:37.147514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.086 [2024-12-06 13:45:37.160453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:50:44.086 [2024-12-06 13:45:37.160663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:44.086 [2024-12-06 13:45:37.160688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.843 ms 00:50:44.086 [2024-12-06 13:45:37.160709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.086 [2024-12-06 13:45:37.160809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.086 [2024-12-06 13:45:37.160823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:44.086 [2024-12-06 13:45:37.160834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:50:44.086 [2024-12-06 13:45:37.160845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.086 [2024-12-06 13:45:37.160912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.086 [2024-12-06 13:45:37.160926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:50:44.086 [2024-12-06 13:45:37.160939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:50:44.086 [2024-12-06 13:45:37.160950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.086 [2024-12-06 13:45:37.160984] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:44.086 [2024-12-06 13:45:37.167103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.086 [2024-12-06 13:45:37.167136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:44.086 [2024-12-06 13:45:37.167154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.127 ms 00:50:44.086 [2024-12-06 13:45:37.167163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.086 [2024-12-06 13:45:37.167201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.086 [2024-12-06 13:45:37.167212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:50:44.086 [2024-12-06 13:45:37.167223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:50:44.086 [2024-12-06 13:45:37.167233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.086 [2024-12-06 13:45:37.167270] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:50:44.086 [2024-12-06 13:45:37.167297] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:50:44.086 [2024-12-06 13:45:37.167334] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:50:44.086 [2024-12-06 13:45:37.167367] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:50:44.086 [2024-12-06 13:45:37.167503] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:50:44.087 [2024-12-06 13:45:37.167535] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:50:44.087 [2024-12-06 13:45:37.167565] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:50:44.087 [2024-12-06 13:45:37.167579] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:50:44.087 [2024-12-06 13:45:37.167593] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:50:44.087 [2024-12-06 13:45:37.167607] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:50:44.087 [2024-12-06 13:45:37.167619] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:50:44.087 [2024-12-06 13:45:37.167638] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:50:44.087 [2024-12-06 13:45:37.167650] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:50:44.087 [2024-12-06 13:45:37.167662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.087 [2024-12-06 13:45:37.167673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:50:44.087 [2024-12-06 13:45:37.167685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:50:44.087 [2024-12-06 13:45:37.167695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.087 [2024-12-06 13:45:37.167773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.087 [2024-12-06 13:45:37.167784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:50:44.087 [2024-12-06 13:45:37.167796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:50:44.087 [2024-12-06 13:45:37.167806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.087 [2024-12-06 13:45:37.167906] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:50:44.087 [2024-12-06 13:45:37.167920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:50:44.087 [2024-12-06 13:45:37.167932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:44.087 [2024-12-06 13:45:37.167944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:44.087 [2024-12-06 13:45:37.167956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:50:44.087 [2024-12-06 13:45:37.167965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:50:44.087 [2024-12-06 13:45:37.167975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:50:44.087 [2024-12-06 13:45:37.167985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:50:44.087 [2024-12-06 13:45:37.167995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:50:44.087 [2024-12-06 13:45:37.168005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:44.087 [2024-12-06 13:45:37.168015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:50:44.087 [2024-12-06 13:45:37.168025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:50:44.087 [2024-12-06 13:45:37.168034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:44.087 [2024-12-06 13:45:37.168056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:50:44.087 [2024-12-06 13:45:37.168068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:50:44.087 [2024-12-06 13:45:37.168078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:44.087 [2024-12-06 13:45:37.168089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:50:44.087 [2024-12-06 13:45:37.168099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:50:44.087 [2024-12-06 13:45:37.168109] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:44.087 [2024-12-06 13:45:37.168119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:50:44.087 [2024-12-06 13:45:37.168129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:50:44.087 [2024-12-06 13:45:37.168139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:44.087 [2024-12-06 13:45:37.168149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:50:44.087 [2024-12-06 13:45:37.168159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:50:44.087 [2024-12-06 13:45:37.168169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:44.087 [2024-12-06 13:45:37.168178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:50:44.087 [2024-12-06 13:45:37.168188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:50:44.087 [2024-12-06 13:45:37.168198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:44.087 [2024-12-06 13:45:37.168207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:50:44.087 [2024-12-06 13:45:37.168217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:50:44.087 [2024-12-06 13:45:37.168226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:44.087 [2024-12-06 13:45:37.168235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:50:44.087 [2024-12-06 13:45:37.168244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:50:44.087 [2024-12-06 13:45:37.168253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:44.087 [2024-12-06 13:45:37.168263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:50:44.087 [2024-12-06 13:45:37.168272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:50:44.087 [2024-12-06 13:45:37.168281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:44.087 [2024-12-06 13:45:37.168291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:50:44.087 [2024-12-06 13:45:37.168300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:50:44.087 [2024-12-06 13:45:37.168309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:44.087 [2024-12-06 13:45:37.168319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:50:44.087 [2024-12-06 13:45:37.168328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:50:44.087 [2024-12-06 13:45:37.168337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:44.087 [2024-12-06 13:45:37.168346] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:50:44.087 [2024-12-06 13:45:37.168357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:50:44.087 [2024-12-06 13:45:37.168368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:44.087 [2024-12-06 13:45:37.168380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:44.087 [2024-12-06 13:45:37.168391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:50:44.087 [2024-12-06 13:45:37.168413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:50:44.087 [2024-12-06 13:45:37.168423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:50:44.087 
[2024-12-06 13:45:37.168433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:50:44.087 [2024-12-06 13:45:37.168443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:50:44.087 [2024-12-06 13:45:37.168453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:50:44.087 [2024-12-06 13:45:37.168464] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:50:44.087 [2024-12-06 13:45:37.168478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:44.087 [2024-12-06 13:45:37.168495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:50:44.087 [2024-12-06 13:45:37.168507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:50:44.087 [2024-12-06 13:45:37.168518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:50:44.087 [2024-12-06 13:45:37.168529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:50:44.087 [2024-12-06 13:45:37.168539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:50:44.087 [2024-12-06 13:45:37.168550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:50:44.087 [2024-12-06 13:45:37.168561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:50:44.087 [2024-12-06 13:45:37.168572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:50:44.087 [2024-12-06 13:45:37.168583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:50:44.087 [2024-12-06 13:45:37.168594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:50:44.087 [2024-12-06 13:45:37.168604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:50:44.087 [2024-12-06 13:45:37.168616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:50:44.087 [2024-12-06 13:45:37.168626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:50:44.087 [2024-12-06 13:45:37.168637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:50:44.087 [2024-12-06 13:45:37.168647] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:50:44.087 [2024-12-06 13:45:37.168659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:44.087 [2024-12-06 13:45:37.168670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:50:44.087 [2024-12-06 13:45:37.168681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:50:44.087 [2024-12-06 13:45:37.168691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:50:44.087 [2024-12-06 13:45:37.168702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:50:44.087 [2024-12-06 13:45:37.168713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.087 [2024-12-06 13:45:37.168725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:50:44.087 [2024-12-06 13:45:37.168735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:50:44.087 [2024-12-06 13:45:37.168751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.347 [2024-12-06 13:45:37.219641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.347 [2024-12-06 13:45:37.219691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:44.347 [2024-12-06 13:45:37.219713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.833 ms 00:50:44.347 [2024-12-06 13:45:37.219725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.347 [2024-12-06 13:45:37.219818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.347 [2024-12-06 13:45:37.219831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:50:44.347 [2024-12-06 13:45:37.219842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:50:44.347 [2024-12-06 13:45:37.219857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.347 [2024-12-06 13:45:37.281191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.347 [2024-12-06 13:45:37.281384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:44.347 [2024-12-06 13:45:37.281422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.234 ms 00:50:44.347 [2024-12-06 13:45:37.281435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.347 [2024-12-06 13:45:37.281491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.347 [2024-12-06 13:45:37.281504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:44.347 [2024-12-06 13:45:37.281516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:50:44.347 [2024-12-06 13:45:37.281527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.347 [2024-12-06 13:45:37.282382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.347 [2024-12-06 13:45:37.282421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:44.347 [2024-12-06 13:45:37.282434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:50:44.347 [2024-12-06 13:45:37.282445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.347 [2024-12-06 13:45:37.282583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.347 [2024-12-06 13:45:37.282602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:44.347 [2024-12-06 13:45:37.282614] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:50:44.347 [2024-12-06 13:45:37.282624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.347 [2024-12-06 13:45:37.307460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.347 [2024-12-06 13:45:37.307539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:44.347 [2024-12-06 13:45:37.307561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.812 ms 00:50:44.347 [2024-12-06 13:45:37.307579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.347 [2024-12-06 13:45:37.330993] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:50:44.347 [2024-12-06 13:45:37.331035] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:50:44.347 [2024-12-06 13:45:37.331054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.347 [2024-12-06 13:45:37.331069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:50:44.347 [2024-12-06 13:45:37.331083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.317 ms 00:50:44.347 [2024-12-06 13:45:37.331094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.347 [2024-12-06 13:45:37.361732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.347 [2024-12-06 13:45:37.361775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:50:44.347 [2024-12-06 13:45:37.361790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.586 ms 00:50:44.347 [2024-12-06 13:45:37.361802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.347 [2024-12-06 13:45:37.380345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.347 [2024-12-06 13:45:37.380385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:50:44.347 [2024-12-06 13:45:37.380406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.479 ms 00:50:44.347 [2024-12-06 13:45:37.380417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.347 [2024-12-06 13:45:37.398472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.347 [2024-12-06 13:45:37.398507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:50:44.347 [2024-12-06 13:45:37.398520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.012 ms 00:50:44.347 [2024-12-06 13:45:37.398530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.347 [2024-12-06 13:45:37.399431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.347 [2024-12-06 13:45:37.399465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:50:44.347 [2024-12-06 13:45:37.399488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:50:44.347 [2024-12-06 13:45:37.399500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.606 [2024-12-06 13:45:37.506164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.606 [2024-12-06 13:45:37.506255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:50:44.606 [2024-12-06 13:45:37.506290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 106.637 ms 00:50:44.606 [2024-12-06 13:45:37.506303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.606 [2024-12-06 13:45:37.517749] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:50:44.606 [2024-12-06 13:45:37.522312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.606 [2024-12-06 13:45:37.522347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:50:44.606 [2024-12-06 13:45:37.522379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.934 ms 00:50:44.606 [2024-12-06 13:45:37.522391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.606 [2024-12-06 13:45:37.522543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.606 [2024-12-06 13:45:37.522558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:50:44.606 [2024-12-06 13:45:37.522575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:50:44.606 [2024-12-06 13:45:37.522586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.606 [2024-12-06 13:45:37.524077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.606 [2024-12-06 13:45:37.524109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:50:44.606 [2024-12-06 13:45:37.524122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.444 ms 00:50:44.606 [2024-12-06 13:45:37.524133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.606 [2024-12-06 13:45:37.524170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.606 [2024-12-06 13:45:37.524183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:50:44.606 [2024-12-06 13:45:37.524195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:50:44.606 [2024-12-06 13:45:37.524211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.606 [2024-12-06 13:45:37.524254] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:50:44.606 [2024-12-06 13:45:37.524267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.606 [2024-12-06 13:45:37.524279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:50:44.606 [2024-12-06 13:45:37.524290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:50:44.606 [2024-12-06 13:45:37.524301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.606 [2024-12-06 13:45:37.560786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.606 [2024-12-06 13:45:37.560826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:50:44.606 [2024-12-06 13:45:37.560848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.464 ms 00:50:44.606 [2024-12-06 13:45:37.560859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:44.606 [2024-12-06 13:45:37.560936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:44.606 [2024-12-06 13:45:37.560948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:50:44.606 [2024-12-06 13:45:37.560960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:50:44.606 [2024-12-06 13:45:37.560971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:50:44.606 [2024-12-06 13:45:37.564258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 441.192 ms, result 0 00:50:45.983  [2024-12-06T13:45:40.019Z] Copying: 1304/1048576 [kB] (1304 kBps) [2024-12-06T13:45:40.954Z] Copying: 2556/1048576 [kB] (1252 kBps) [2024-12-06T13:45:41.888Z] Copying: 11208/1048576 [kB] (8652 kBps) [2024-12-06T13:45:42.820Z] Copying: 47/1024 [MB] (36 MBps) [2024-12-06T13:45:44.212Z] Copying: 83/1024 [MB] (36 MBps) [2024-12-06T13:45:45.146Z] Copying: 120/1024 [MB] (37 MBps) [2024-12-06T13:45:46.080Z] Copying: 157/1024 [MB] (37 MBps) [2024-12-06T13:45:47.015Z] Copying: 195/1024 [MB] (37 MBps) [2024-12-06T13:45:47.951Z] Copying: 231/1024 [MB] (36 MBps) [2024-12-06T13:45:48.888Z] Copying: 268/1024 [MB] (36 MBps) [2024-12-06T13:45:49.827Z] Copying: 305/1024 [MB] (37 MBps) [2024-12-06T13:45:51.232Z] Copying: 342/1024 [MB] (36 MBps) [2024-12-06T13:45:51.799Z] Copying: 376/1024 [MB] (34 MBps) [2024-12-06T13:45:53.176Z] Copying: 413/1024 [MB] (36 MBps) [2024-12-06T13:45:54.109Z] Copying: 450/1024 [MB] (36 MBps) [2024-12-06T13:45:55.041Z] Copying: 486/1024 [MB] (36 MBps) [2024-12-06T13:45:55.976Z] Copying: 523/1024 [MB] (36 MBps) [2024-12-06T13:45:56.923Z] Copying: 558/1024 [MB] (35 MBps) [2024-12-06T13:45:57.859Z] Copying: 593/1024 [MB] (34 MBps) [2024-12-06T13:45:58.795Z] Copying: 631/1024 [MB] (37 MBps) [2024-12-06T13:46:00.170Z] Copying: 669/1024 [MB] (38 MBps) [2024-12-06T13:46:01.105Z] Copying: 707/1024 [MB] (38 MBps) [2024-12-06T13:46:02.038Z] Copying: 744/1024 [MB] (37 MBps) [2024-12-06T13:46:02.973Z] Copying: 779/1024 [MB] (35 MBps) [2024-12-06T13:46:03.906Z] Copying: 813/1024 [MB] (33 MBps) [2024-12-06T13:46:04.841Z] Copying: 849/1024 [MB] (35 MBps) [2024-12-06T13:46:05.811Z] Copying: 884/1024 [MB] (35 MBps) [2024-12-06T13:46:07.187Z] Copying: 919/1024 [MB] (35 MBps) [2024-12-06T13:46:08.120Z] Copying: 955/1024 [MB] (35 MBps) [2024-12-06T13:46:08.685Z] Copying: 991/1024 [MB] (36 MBps) [2024-12-06T13:46:09.252Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-12-06 13:46:09.224825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.152 [2024-12-06 13:46:09.224942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:51:16.152 [2024-12-06 13:46:09.224985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:51:16.152 [2024-12-06 13:46:09.225010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.152 [2024-12-06 13:46:09.225072] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:51:16.152 [2024-12-06 13:46:09.232721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.152 [2024-12-06 13:46:09.232781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:51:16.152 [2024-12-06 13:46:09.232816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.605 ms 00:51:16.152 [2024-12-06 13:46:09.232846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.152 [2024-12-06 13:46:09.233269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.152 [2024-12-06 13:46:09.233321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:51:16.152 [2024-12-06 13:46:09.233356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:51:16.152 [2024-12-06 13:46:09.233387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:51:16.409 [2024-12-06 13:46:09.254960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.409 [2024-12-06 13:46:09.255022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:51:16.409 [2024-12-06 13:46:09.255046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.502 ms 00:51:16.409 [2024-12-06 13:46:09.255065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.410 [2024-12-06 13:46:09.264076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.410 [2024-12-06 13:46:09.264139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:51:16.410 [2024-12-06 13:46:09.264160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.966 ms 00:51:16.410 [2024-12-06 13:46:09.264177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.410 [2024-12-06 13:46:09.324616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.410 [2024-12-06 13:46:09.324670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:51:16.410 [2024-12-06 13:46:09.324693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.340 ms 00:51:16.410 [2024-12-06 13:46:09.324709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.410 [2024-12-06 13:46:09.356960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.410 [2024-12-06 13:46:09.357017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:51:16.410 [2024-12-06 13:46:09.357040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.191 ms 00:51:16.410 [2024-12-06 13:46:09.357058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.410 [2024-12-06 13:46:09.359320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.410 [2024-12-06 13:46:09.359379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:51:16.410 [2024-12-06 13:46:09.359428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:51:16.410 [2024-12-06 13:46:09.359446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.410 [2024-12-06 13:46:09.418117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.410 [2024-12-06 13:46:09.418170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:51:16.410 [2024-12-06 13:46:09.418192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.628 ms 00:51:16.410 [2024-12-06 13:46:09.418208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.410 [2024-12-06 13:46:09.476446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.410 [2024-12-06 13:46:09.476498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:51:16.410 [2024-12-06 13:46:09.476519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.182 ms 00:51:16.410 [2024-12-06 13:46:09.476535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.669 [2024-12-06 13:46:09.528238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.669 [2024-12-06 13:46:09.528278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:51:16.669 [2024-12-06 13:46:09.528292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.649 ms 00:51:16.669 [2024-12-06 
13:46:09.528303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.669 [2024-12-06 13:46:09.564510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.669 [2024-12-06 13:46:09.564548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:51:16.669 [2024-12-06 13:46:09.564574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.093 ms 00:51:16.669 [2024-12-06 13:46:09.564600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.669 [2024-12-06 13:46:09.564639] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:51:16.669 [2024-12-06 13:46:09.564658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:51:16.669 [2024-12-06 13:46:09.564672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:51:16.669 [2024-12-06 13:46:09.564683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564907] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.564995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 
[2024-12-06 13:46:09.565199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:51:16.669 [2024-12-06 13:46:09.565288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:51:16.670 [2024-12-06 13:46:09.565489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:51:16.670 [2024-12-06 13:46:09.565839] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:51:16.670 [2024-12-06 13:46:09.565850] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 557a187b-33b3-4c37-87be-aa26a920e4a6 00:51:16.670 [2024-12-06 13:46:09.565861] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:51:16.670 [2024-12-06 13:46:09.565877] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263872 00:51:16.670 [2024-12-06 13:46:09.565887] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261888 00:51:16.670 [2024-12-06 13:46:09.565898] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:51:16.670 [2024-12-06 13:46:09.565908] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:51:16.670 [2024-12-06 13:46:09.565930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:51:16.670 [2024-12-06 13:46:09.565940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:51:16.670 [2024-12-06 13:46:09.565950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:51:16.670 [2024-12-06 13:46:09.565959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:51:16.670 [2024-12-06 13:46:09.565970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.670 [2024-12-06 13:46:09.565980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:51:16.670 [2024-12-06 13:46:09.565991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.332 ms 00:51:16.670 [2024-12-06 13:46:09.566001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.670 [2024-12-06 13:46:09.587466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.670 [2024-12-06 13:46:09.587501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:51:16.670 [2024-12-06 13:46:09.587514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.420 ms 00:51:16.670 [2024-12-06 13:46:09.587540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.670 [2024-12-06 13:46:09.588204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.670 [2024-12-06 13:46:09.588225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:51:16.670 [2024-12-06 13:46:09.588238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:51:16.670 [2024-12-06 13:46:09.588255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.670 [2024-12-06 13:46:09.644431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:16.670 [2024-12-06 13:46:09.644470] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:16.670 [2024-12-06 13:46:09.644485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:16.670 [2024-12-06 13:46:09.644496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.670 [2024-12-06 13:46:09.644565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:16.670 [2024-12-06 13:46:09.644578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:16.670 [2024-12-06 13:46:09.644590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:16.670 [2024-12-06 13:46:09.644607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.670 [2024-12-06 13:46:09.644715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:16.670 [2024-12-06 13:46:09.644730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:16.670 [2024-12-06 13:46:09.644742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:16.670 [2024-12-06 13:46:09.644753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.670 [2024-12-06 13:46:09.644773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:16.670 [2024-12-06 13:46:09.644784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:16.670 [2024-12-06 13:46:09.644795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:16.670 [2024-12-06 13:46:09.644806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.929 [2024-12-06 13:46:09.782319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:16.929 [2024-12-06 13:46:09.782414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:16.929 [2024-12-06 13:46:09.782433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:16.929 [2024-12-06 13:46:09.782446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.929 [2024-12-06 13:46:09.886979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:16.929 [2024-12-06 13:46:09.887057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:16.929 [2024-12-06 13:46:09.887074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:16.929 [2024-12-06 13:46:09.887102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.929 [2024-12-06 13:46:09.887237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:16.929 [2024-12-06 13:46:09.887249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:16.929 [2024-12-06 13:46:09.887261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:16.929 [2024-12-06 13:46:09.887272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.929 [2024-12-06 13:46:09.887322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:16.929 [2024-12-06 13:46:09.887334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:16.929 [2024-12-06 13:46:09.887345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:16.929 [2024-12-06 13:46:09.887356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.929 [2024-12-06 13:46:09.887527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:51:16.929 [2024-12-06 13:46:09.887548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:16.929 [2024-12-06 13:46:09.887560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:16.929 [2024-12-06 13:46:09.887571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.929 [2024-12-06 13:46:09.887612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:16.929 [2024-12-06 13:46:09.887625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:51:16.929 [2024-12-06 13:46:09.887637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:16.929 [2024-12-06 13:46:09.887648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.929 [2024-12-06 13:46:09.887697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:16.929 [2024-12-06 13:46:09.887716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:16.929 [2024-12-06 13:46:09.887727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:16.929 [2024-12-06 13:46:09.887738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.929 [2024-12-06 13:46:09.887791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:16.929 [2024-12-06 13:46:09.887814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:16.929 [2024-12-06 13:46:09.887826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:16.929 [2024-12-06 13:46:09.887837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.929 [2024-12-06 13:46:09.887995] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 663.130 ms, result 0 00:51:18.304 00:51:18.304 00:51:18.304 13:46:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:51:20.202 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:51:20.202 13:46:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:51:20.202 [2024-12-06 13:46:12.936436] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
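An illustrative aside, not part of the captured output: the figures in this run are internally consistent. The spdk_dd invocation above reads 262144 blocks starting at block offset 262144 (--skip), i.e. the second half of the written test region, and the copy progress that follows tops out at 1024/1024 [MB]; that total only works out if the ftl0 bdev exposes 4096-byte blocks, which is an inference from these numbers rather than something the log states. The WAF reported in the earlier ftl_debug.c "Dump statistics" block likewise follows directly from the dumped write counters. A minimal Python sanity check, with the assumed block size marked:

    # Sanity-check arithmetic visible in the surrounding log output.
    blocks = 262144            # spdk_dd --count / --skip values from the command above
    block_size = 4096          # ASSUMED ftl0 block size, inferred from the 1024 MiB copy total
    print(blocks * block_size / 2**20)            # 1024.0 -> matches "Copying: 1024/1024 [MB]"

    total_writes = 263872      # from the "Dump statistics" section earlier in this log
    user_writes = 261888
    print(round(total_writes / user_writes, 4))   # 1.0076 -> matches "WAF: 1.0076"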
00:51:20.202 [2024-12-06 13:46:12.936601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83600 ] 00:51:20.202 [2024-12-06 13:46:13.123760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:20.202 [2024-12-06 13:46:13.293060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:20.765 [2024-12-06 13:46:13.730707] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:51:20.765 [2024-12-06 13:46:13.730814] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:51:21.024 [2024-12-06 13:46:13.898867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.024 [2024-12-06 13:46:13.898929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:51:21.024 [2024-12-06 13:46:13.898949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:51:21.024 [2024-12-06 13:46:13.898960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.024 [2024-12-06 13:46:13.899017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.024 [2024-12-06 13:46:13.899034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:21.024 [2024-12-06 13:46:13.899045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:51:21.024 [2024-12-06 13:46:13.899055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.024 [2024-12-06 13:46:13.899078] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:51:21.024 [2024-12-06 13:46:13.900127] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:51:21.024 [2024-12-06 13:46:13.900169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.024 [2024-12-06 13:46:13.900181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:21.024 [2024-12-06 13:46:13.900192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:51:21.024 [2024-12-06 13:46:13.900203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.024 [2024-12-06 13:46:13.902838] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:51:21.024 [2024-12-06 13:46:13.923206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.024 [2024-12-06 13:46:13.923249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:51:21.024 [2024-12-06 13:46:13.923265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.369 ms 00:51:21.024 [2024-12-06 13:46:13.923277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.024 [2024-12-06 13:46:13.923355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.024 [2024-12-06 13:46:13.923368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:51:21.024 [2024-12-06 13:46:13.923381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:51:21.024 [2024-12-06 13:46:13.923391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.024 [2024-12-06 13:46:13.936281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:51:21.024 [2024-12-06 13:46:13.936313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:21.024 [2024-12-06 13:46:13.936327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.770 ms 00:51:21.024 [2024-12-06 13:46:13.936359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.024 [2024-12-06 13:46:13.936463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.024 [2024-12-06 13:46:13.936478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:21.024 [2024-12-06 13:46:13.936490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:51:21.024 [2024-12-06 13:46:13.936501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.024 [2024-12-06 13:46:13.936566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.024 [2024-12-06 13:46:13.936579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:51:21.024 [2024-12-06 13:46:13.936590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:51:21.024 [2024-12-06 13:46:13.936601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.024 [2024-12-06 13:46:13.936637] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:51:21.024 [2024-12-06 13:46:13.942580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.024 [2024-12-06 13:46:13.942616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:21.024 [2024-12-06 13:46:13.942649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.953 ms 00:51:21.024 [2024-12-06 13:46:13.942661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.024 [2024-12-06 13:46:13.942698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.024 [2024-12-06 13:46:13.942710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:51:21.024 [2024-12-06 13:46:13.942722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:51:21.024 [2024-12-06 13:46:13.942732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.024 [2024-12-06 13:46:13.942772] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:51:21.024 [2024-12-06 13:46:13.942801] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:51:21.024 [2024-12-06 13:46:13.942840] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:51:21.024 [2024-12-06 13:46:13.942865] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:51:21.024 [2024-12-06 13:46:13.942962] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:51:21.024 [2024-12-06 13:46:13.942977] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:51:21.024 [2024-12-06 13:46:13.942991] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:51:21.024 [2024-12-06 13:46:13.943006] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:51:21.024 [2024-12-06 13:46:13.943020] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:51:21.024 [2024-12-06 13:46:13.943032] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:51:21.024 [2024-12-06 13:46:13.943043] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:51:21.024 [2024-12-06 13:46:13.943058] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:51:21.024 [2024-12-06 13:46:13.943069] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:51:21.024 [2024-12-06 13:46:13.943080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.024 [2024-12-06 13:46:13.943091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:51:21.024 [2024-12-06 13:46:13.943102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:51:21.024 [2024-12-06 13:46:13.943113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.024 [2024-12-06 13:46:13.943187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.024 [2024-12-06 13:46:13.943198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:51:21.025 [2024-12-06 13:46:13.943209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:51:21.025 [2024-12-06 13:46:13.943219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.025 [2024-12-06 13:46:13.943317] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:51:21.025 [2024-12-06 13:46:13.943338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:51:21.025 [2024-12-06 13:46:13.943350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:51:21.025 [2024-12-06 13:46:13.943361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:51:21.025 [2024-12-06 13:46:13.943383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:51:21.025 [2024-12-06 13:46:13.943419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:51:21.025 [2024-12-06 13:46:13.943429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:51:21.025 [2024-12-06 13:46:13.943449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:51:21.025 [2024-12-06 13:46:13.943466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:51:21.025 [2024-12-06 13:46:13.943478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:51:21.025 [2024-12-06 13:46:13.943502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:51:21.025 [2024-12-06 13:46:13.943512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:51:21.025 [2024-12-06 13:46:13.943522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:51:21.025 [2024-12-06 13:46:13.943541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:51:21.025 [2024-12-06 13:46:13.943551] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:51:21.025 [2024-12-06 13:46:13.943571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:21.025 [2024-12-06 13:46:13.943591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:51:21.025 [2024-12-06 13:46:13.943601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:21.025 [2024-12-06 13:46:13.943620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:51:21.025 [2024-12-06 13:46:13.943630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:21.025 [2024-12-06 13:46:13.943648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:51:21.025 [2024-12-06 13:46:13.943658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:51:21.025 [2024-12-06 13:46:13.943678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:51:21.025 [2024-12-06 13:46:13.943687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:51:21.025 [2024-12-06 13:46:13.943707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:51:21.025 [2024-12-06 13:46:13.943716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:51:21.025 [2024-12-06 13:46:13.943725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:51:21.025 [2024-12-06 13:46:13.943735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:51:21.025 [2024-12-06 13:46:13.943744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:51:21.025 [2024-12-06 13:46:13.943754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:51:21.025 [2024-12-06 13:46:13.943773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:51:21.025 [2024-12-06 13:46:13.943782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943792] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:51:21.025 [2024-12-06 13:46:13.943803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:51:21.025 [2024-12-06 13:46:13.943813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:51:21.025 [2024-12-06 13:46:13.943824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:21.025 [2024-12-06 13:46:13.943834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:51:21.025 [2024-12-06 13:46:13.943844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:51:21.025 [2024-12-06 13:46:13.943854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:51:21.025 
[2024-12-06 13:46:13.943863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:51:21.025 [2024-12-06 13:46:13.943873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:51:21.025 [2024-12-06 13:46:13.943883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:51:21.025 [2024-12-06 13:46:13.943894] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:51:21.025 [2024-12-06 13:46:13.943908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:21.025 [2024-12-06 13:46:13.943924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:51:21.025 [2024-12-06 13:46:13.943936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:51:21.025 [2024-12-06 13:46:13.943946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:51:21.025 [2024-12-06 13:46:13.943957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:51:21.025 [2024-12-06 13:46:13.943968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:51:21.025 [2024-12-06 13:46:13.943979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:51:21.025 [2024-12-06 13:46:13.943990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:51:21.025 [2024-12-06 13:46:13.944001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:51:21.025 [2024-12-06 13:46:13.944011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:51:21.025 [2024-12-06 13:46:13.944022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:51:21.025 [2024-12-06 13:46:13.944033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:51:21.025 [2024-12-06 13:46:13.944044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:51:21.025 [2024-12-06 13:46:13.944055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:51:21.025 [2024-12-06 13:46:13.944066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:51:21.025 [2024-12-06 13:46:13.944077] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:51:21.025 [2024-12-06 13:46:13.944088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:21.025 [2024-12-06 13:46:13.944100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:51:21.025 [2024-12-06 13:46:13.944111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:51:21.025 [2024-12-06 13:46:13.944121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:51:21.025 [2024-12-06 13:46:13.944132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:51:21.025 [2024-12-06 13:46:13.944143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.025 [2024-12-06 13:46:13.944160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:51:21.025 [2024-12-06 13:46:13.944171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:51:21.025 [2024-12-06 13:46:13.944181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.025 [2024-12-06 13:46:13.993872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.025 [2024-12-06 13:46:13.993919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:21.025 [2024-12-06 13:46:13.993936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.633 ms 00:51:21.025 [2024-12-06 13:46:13.993953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.025 [2024-12-06 13:46:13.994050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.025 [2024-12-06 13:46:13.994063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:51:21.025 [2024-12-06 13:46:13.994074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:51:21.025 [2024-12-06 13:46:13.994086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.025 [2024-12-06 13:46:14.061274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.025 [2024-12-06 13:46:14.061338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:21.025 [2024-12-06 13:46:14.061354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.081 ms 00:51:21.025 [2024-12-06 13:46:14.061366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.025 [2024-12-06 13:46:14.061427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.025 [2024-12-06 13:46:14.061446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:21.025 [2024-12-06 13:46:14.061458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:51:21.025 [2024-12-06 13:46:14.061468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.025 [2024-12-06 13:46:14.062326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.025 [2024-12-06 13:46:14.062349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:21.025 [2024-12-06 13:46:14.062361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:51:21.025 [2024-12-06 13:46:14.062372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.026 [2024-12-06 13:46:14.062530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.026 [2024-12-06 13:46:14.062545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:21.026 [2024-12-06 13:46:14.062561] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:51:21.026 [2024-12-06 13:46:14.062572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.026 [2024-12-06 13:46:14.085989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.026 [2024-12-06 13:46:14.086036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:21.026 [2024-12-06 13:46:14.086067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.391 ms 00:51:21.026 [2024-12-06 13:46:14.086078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.026 [2024-12-06 13:46:14.106787] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:51:21.026 [2024-12-06 13:46:14.106824] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:51:21.026 [2024-12-06 13:46:14.106856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.026 [2024-12-06 13:46:14.106868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:51:21.026 [2024-12-06 13:46:14.106882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.638 ms 00:51:21.026 [2024-12-06 13:46:14.106892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.284 [2024-12-06 13:46:14.138212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.284 [2024-12-06 13:46:14.138261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:51:21.284 [2024-12-06 13:46:14.138277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.273 ms 00:51:21.284 [2024-12-06 13:46:14.138306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.284 [2024-12-06 13:46:14.157402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.284 [2024-12-06 13:46:14.157445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:51:21.284 [2024-12-06 13:46:14.157460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.029 ms 00:51:21.284 [2024-12-06 13:46:14.157470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.284 [2024-12-06 13:46:14.175297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.284 [2024-12-06 13:46:14.175330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:51:21.284 [2024-12-06 13:46:14.175343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.770 ms 00:51:21.284 [2024-12-06 13:46:14.175352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.284 [2024-12-06 13:46:14.176256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.284 [2024-12-06 13:46:14.176290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:51:21.284 [2024-12-06 13:46:14.176307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:51:21.284 [2024-12-06 13:46:14.176318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.284 [2024-12-06 13:46:14.274082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.284 [2024-12-06 13:46:14.274157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:51:21.284 [2024-12-06 13:46:14.274199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.736 ms 00:51:21.284 [2024-12-06 13:46:14.274210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.284 [2024-12-06 13:46:14.285269] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:51:21.284 [2024-12-06 13:46:14.289945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.284 [2024-12-06 13:46:14.289977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:51:21.284 [2024-12-06 13:46:14.289994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.670 ms 00:51:21.284 [2024-12-06 13:46:14.290005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.284 [2024-12-06 13:46:14.290114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.284 [2024-12-06 13:46:14.290128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:51:21.284 [2024-12-06 13:46:14.290145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:51:21.284 [2024-12-06 13:46:14.290156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.284 [2024-12-06 13:46:14.291612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.284 [2024-12-06 13:46:14.291640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:51:21.284 [2024-12-06 13:46:14.291652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:51:21.284 [2024-12-06 13:46:14.291663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.284 [2024-12-06 13:46:14.291689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.284 [2024-12-06 13:46:14.291702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:51:21.284 [2024-12-06 13:46:14.291713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:51:21.284 [2024-12-06 13:46:14.291723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.284 [2024-12-06 13:46:14.291772] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:51:21.284 [2024-12-06 13:46:14.291786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.284 [2024-12-06 13:46:14.291797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:51:21.284 [2024-12-06 13:46:14.291808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:51:21.284 [2024-12-06 13:46:14.291818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.284 [2024-12-06 13:46:14.328552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.284 [2024-12-06 13:46:14.328592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:51:21.284 [2024-12-06 13:46:14.328613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.711 ms 00:51:21.284 [2024-12-06 13:46:14.328624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:21.284 [2024-12-06 13:46:14.328702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:21.284 [2024-12-06 13:46:14.328715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:51:21.284 [2024-12-06 13:46:14.328727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:51:21.284 [2024-12-06 13:46:14.328738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
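Each management step in the startup trace above is emitted by trace_step() in mngt/ftl_mngt.c as an Action / name / duration / status quad, which makes the log easy to profile after the fact. A minimal sketch, not part of the test suite, that ranks steps by duration from a saved console log (it assumes one record per line, as in the raw console output, and a hypothetical file name ftl.log):

    # rank FTL management steps by duration, slowest first; ftl.log is an assumed name
    grep 'trace_step' ftl.log \
      | awk '/name:/     { sub(/.*name: /, "");     step = $0 }
             /duration:/ { sub(/.*duration: /, ""); print $1 "\t" step }' \
      | sort -rn | head

Run against the startup above, 'Restore P2L checkpoints' (97.736 ms) and 'Initialize NV cache' (67.081 ms) would top the list, together accounting for over a third of the total startup time reported just below.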
00:51:21.284 [2024-12-06 13:46:14.330251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 430.853 ms, result 0
00:51:22.659 .. 00:51:56.344 [2024-12-06T13:46:16.705Z .. 2024-12-06T13:46:49.184Z] Copying: 28/1024 [MB] (28 MBps) ... Copying: 1005/1024 [MB] (30 MBps) [34 intermediate dd progress ticks at a steady 28-30 MBps, condensed]
[2024-12-06T13:46:49.444Z] Copying: 1024/1024 [MB] (average 29 MBps)
[2024-12-06 13:46:49.397999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:56.344 [2024-12-06 13:46:49.398089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:51:56.344 [2024-12-06 13:46:49.398109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:51:56.344 [2024-12-06 13:46:49.398122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:56.344 [2024-12-06 13:46:49.398150] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:51:56.344 [2024-12-06 13:46:49.403425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:56.344 [2024-12-06 13:46:49.403491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:51:56.344 [2024-12-06 13:46:49.403505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.254 ms
00:51:56.344 [2024-12-06 13:46:49.403517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:56.344 [2024-12-06 13:46:49.403759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:56.344 [2024-12-06 13:46:49.403773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop
core poller 00:51:56.344 [2024-12-06 13:46:49.403786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:51:56.344 [2024-12-06 13:46:49.403797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.344 [2024-12-06 13:46:49.406831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:56.344 [2024-12-06 13:46:49.406857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:51:56.344 [2024-12-06 13:46:49.406871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.018 ms 00:51:56.344 [2024-12-06 13:46:49.406888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.344 [2024-12-06 13:46:49.412349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:56.344 [2024-12-06 13:46:49.412384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:51:56.344 [2024-12-06 13:46:49.412405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.439 ms 00:51:56.344 [2024-12-06 13:46:49.412417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.606 [2024-12-06 13:46:49.453941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:56.606 [2024-12-06 13:46:49.453986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:51:56.606 [2024-12-06 13:46:49.454002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.438 ms 00:51:56.606 [2024-12-06 13:46:49.454013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.606 [2024-12-06 13:46:49.476501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:56.606 [2024-12-06 13:46:49.476543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:51:56.606 [2024-12-06 13:46:49.476575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.442 ms 00:51:56.606 [2024-12-06 13:46:49.476588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.606 [2024-12-06 13:46:49.478567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:56.606 [2024-12-06 13:46:49.478603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:51:56.606 [2024-12-06 13:46:49.478633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:51:56.606 [2024-12-06 13:46:49.478645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.606 [2024-12-06 13:46:49.515080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:56.606 [2024-12-06 13:46:49.515116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:51:56.606 [2024-12-06 13:46:49.515146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.416 ms 00:51:56.606 [2024-12-06 13:46:49.515156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.606 [2024-12-06 13:46:49.550510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:56.606 [2024-12-06 13:46:49.550545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:51:56.606 [2024-12-06 13:46:49.550574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.314 ms 00:51:56.606 [2024-12-06 13:46:49.550585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.606 [2024-12-06 13:46:49.585336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:56.606 [2024-12-06 13:46:49.585374] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:51:56.606 [2024-12-06 13:46:49.585388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.712 ms
00:51:56.606 [2024-12-06 13:46:49.585422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:56.606 [2024-12-06 13:46:49.619900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:56.606 [2024-12-06 13:46:49.619934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:51:56.606 [2024-12-06 13:46:49.619963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.393 ms
00:51:56.606 [2024-12-06 13:46:49.619974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:56.606 [2024-12-06 13:46:49.620011] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:51:56.606 [2024-12-06 13:46:49.620035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:51:56.606 [2024-12-06 13:46:49.620054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:51:56.606 [2024-12-06 13:46:49.620066 .. 13:46:49.621155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 .. Band 100: 0 / 261120 wr_cnt: 0 state: free [98 identical records for the remaining free bands, condensed]
00:51:56.607 [2024-12-06 13:46:49.621174] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:51:56.607 [2024-12-06 13:46:49.621184] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 557a187b-33b3-4c37-87be-aa26a920e4a6
00:51:56.607 [2024-12-06 13:46:49.621195] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:51:56.607 [2024-12-06 13:46:49.621206] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:51:56.607 [2024-12-06 13:46:49.621216] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:51:56.607 [2024-12-06 13:46:49.621227] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:51:56.607 [2024-12-06 13:46:49.621249] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:51:56.607 [2024-12-06 13:46:49.621260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:51:56.607 [2024-12-06 13:46:49.621270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:51:56.607 [2024-12-06 13:46:49.621280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:51:56.607 [2024-12-06 13:46:49.621289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:51:56.607 [2024-12-06 13:46:49.621299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:56.607 [2024-12-06 13:46:49.621311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:51:56.607 [2024-12-06 13:46:49.621323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms
00:51:56.607 [2024-12-06 13:46:49.621337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:56.607 [2024-12-06 13:46:49.642275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:56.607 [2024-12-06 13:46:49.642308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:51:56.607 [2024-12-06 13:46:49.642322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.901 ms
00:51:56.607 [2024-12-06 13:46:49.642333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:56.607 [2024-12-06 13:46:49.643012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:56.607 [2024-12-06 13:46:49.643041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:51:56.607 [2024-12-06 13:46:49.643053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms
00:51:56.607 [2024-12-06 13:46:49.643064] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.607 [2024-12-06 13:46:49.696943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:56.607 [2024-12-06 13:46:49.696980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:56.607 [2024-12-06 13:46:49.697010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:56.607 [2024-12-06 13:46:49.697021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.607 [2024-12-06 13:46:49.697086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:56.607 [2024-12-06 13:46:49.697103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:56.607 [2024-12-06 13:46:49.697115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:56.607 [2024-12-06 13:46:49.697126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.607 [2024-12-06 13:46:49.697193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:56.608 [2024-12-06 13:46:49.697207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:56.608 [2024-12-06 13:46:49.697218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:56.608 [2024-12-06 13:46:49.697228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.608 [2024-12-06 13:46:49.697263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:56.608 [2024-12-06 13:46:49.697274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:56.608 [2024-12-06 13:46:49.697291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:56.608 [2024-12-06 13:46:49.697302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.867 [2024-12-06 13:46:49.831256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:56.867 [2024-12-06 13:46:49.831338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:56.867 [2024-12-06 13:46:49.831355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:56.867 [2024-12-06 13:46:49.831383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.867 [2024-12-06 13:46:49.935069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:56.867 [2024-12-06 13:46:49.935153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:56.867 [2024-12-06 13:46:49.935170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:56.867 [2024-12-06 13:46:49.935197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.867 [2024-12-06 13:46:49.935329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:56.867 [2024-12-06 13:46:49.935342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:56.867 [2024-12-06 13:46:49.935353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:56.867 [2024-12-06 13:46:49.935364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:56.867 [2024-12-06 13:46:49.935431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:56.867 [2024-12-06 13:46:49.935452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:56.867 [2024-12-06 13:46:49.935463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms
00:51:56.867 [2024-12-06 13:46:49.935494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:56.867 [2024-12-06 13:46:49.935634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:51:56.867 [2024-12-06 13:46:49.935649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:51:56.867 [2024-12-06 13:46:49.935661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:51:56.867 [2024-12-06 13:46:49.935673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:56.867 [2024-12-06 13:46:49.935721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:51:56.867 [2024-12-06 13:46:49.935734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:51:56.867 [2024-12-06 13:46:49.935745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:51:56.867 [2024-12-06 13:46:49.935762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:56.867 [2024-12-06 13:46:49.935817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:51:56.867 [2024-12-06 13:46:49.935830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:51:56.867 [2024-12-06 13:46:49.935841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:51:56.867 [2024-12-06 13:46:49.935851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:56.867 [2024-12-06 13:46:49.935904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:51:56.867 [2024-12-06 13:46:49.935921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:51:56.867 [2024-12-06 13:46:49.935933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:51:56.867 [2024-12-06 13:46:49.935948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:56.867 [2024-12-06 13:46:49.936104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 538.059 ms, result 0
00:51:58.249
00:51:58.249
00:51:58.249 13:46:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:52:00.151 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
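The md5sum -c check just above is the dirty-shutdown test's pass/fail gate: a checksum taken before the FTL device was shut down uncleanly has to match the data read back after recovery. A minimal sketch of that record-then-verify pattern (the paths here are placeholders, not the test's literal layout):

    # before the dirty shutdown: snapshot the file's checksum
    md5sum testfile2 > testfile2.md5
    # ... kill the SPDK target mid-I/O, restart it, reload the FTL bdev ...
    # after recovery: an 'OK' per file means the data survived intact
    md5sum -c testfile2.md5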
00:52:00.151 13:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:52:00.151 13:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:52:00.151 13:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:52:00.151 13:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:52:00.151 13:46:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:52:00.410 13:46:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:52:00.410 13:46:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:52:00.410 13:46:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81815
00:52:00.410 13:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81815 ']'
00:52:00.410 13:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81815
00:52:00.410 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81815) - No such process
00:52:00.410 Process with pid 81815 is not found
00:52:00.410 13:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81815 is not found'
00:52:00.410 13:46:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:52:00.669 13:46:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:52:00.669 13:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:52:00.669 Remove shared memory files
00:52:00.669 13:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:52:00.669 13:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:52:00.669 13:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:52:00.669 13:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:52:00.669 13:46:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:52:00.669
00:52:00.669 real 3m29.751s
00:52:00.669 user 3m57.638s
00:52:00.669 sys 0m40.694s
00:52:00.669 13:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:52:00.669 13:46:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:52:00.669 ************************************
00:52:00.669 END TEST ftl_dirty_shutdown
00:52:00.669 ************************************
00:52:00.669 13:46:53 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:52:00.669 13:46:53 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:52:00.669 13:46:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:52:00.669 13:46:53 ftl -- common/autotest_common.sh@10 -- # set +x
00:52:00.669 ************************************
00:52:00.669 START TEST ftl_upgrade_shutdown
00:52:00.669 ************************************
00:52:00.669 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:52:00.928 * Looking for test storage...
00:52:00.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:52:00.928 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:52:00.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:00.929 --rc genhtml_branch_coverage=1 00:52:00.929 --rc genhtml_function_coverage=1 00:52:00.929 --rc genhtml_legend=1 00:52:00.929 --rc geninfo_all_blocks=1 00:52:00.929 --rc geninfo_unexecuted_blocks=1 00:52:00.929 00:52:00.929 ' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:52:00.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:00.929 --rc genhtml_branch_coverage=1 00:52:00.929 --rc genhtml_function_coverage=1 00:52:00.929 --rc genhtml_legend=1 00:52:00.929 --rc geninfo_all_blocks=1 00:52:00.929 --rc geninfo_unexecuted_blocks=1 00:52:00.929 00:52:00.929 ' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:52:00.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:00.929 --rc genhtml_branch_coverage=1 00:52:00.929 --rc genhtml_function_coverage=1 00:52:00.929 --rc genhtml_legend=1 00:52:00.929 --rc geninfo_all_blocks=1 00:52:00.929 --rc geninfo_unexecuted_blocks=1 00:52:00.929 00:52:00.929 ' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:52:00.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:00.929 --rc genhtml_branch_coverage=1 00:52:00.929 --rc genhtml_function_coverage=1 00:52:00.929 --rc genhtml_legend=1 00:52:00.929 --rc geninfo_all_blocks=1 00:52:00.929 --rc geninfo_unexecuted_blocks=1 00:52:00.929 00:52:00.929 ' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:52:00.929 13:46:53 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84067 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84067 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84067 ']' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:52:00.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:00.929 13:46:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:52:01.188 [2024-12-06 13:46:54.068157] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
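waitforlisten above blocks until the freshly launched spdk_tgt (pid 84067 here) starts answering on its RPC socket before any bdev setup is attempted. A simplified sketch of that polling loop, using the /var/tmp/spdk.sock socket named in the log (the real helper in autotest_common.sh also checks that the pid stays alive while it waits):

    # poll the target's RPC server until it responds, or give up after ~10 s
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done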
00:52:01.188 [2024-12-06 13:46:54.068292] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84067 ] 00:52:01.188 [2024-12-06 13:46:54.258657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:01.452 [2024-12-06 13:46:54.455846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:52:02.448 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:52:03.013 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:52:03.013 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:52:03.013 13:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:52:03.014 13:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:52:03.014 13:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:52:03.014 13:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:52:03.014 13:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:52:03.014 13:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:52:03.014 13:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:52:03.014 { 00:52:03.014 "name": "basen1", 00:52:03.014 "aliases": [ 00:52:03.014 "5b0c8337-ba96-4312-82cb-f04deeeeaf7b" 00:52:03.014 ], 00:52:03.014 "product_name": "NVMe disk", 00:52:03.014 "block_size": 4096, 00:52:03.014 "num_blocks": 1310720, 00:52:03.014 "uuid": "5b0c8337-ba96-4312-82cb-f04deeeeaf7b", 00:52:03.014 "numa_id": -1, 00:52:03.014 "assigned_rate_limits": { 00:52:03.014 "rw_ios_per_sec": 0, 00:52:03.014 "rw_mbytes_per_sec": 0, 00:52:03.014 "r_mbytes_per_sec": 0, 00:52:03.014 "w_mbytes_per_sec": 0 00:52:03.014 }, 00:52:03.014 "claimed": true, 00:52:03.014 "claim_type": "read_many_write_one", 00:52:03.014 "zoned": false, 00:52:03.014 "supported_io_types": { 00:52:03.014 "read": true, 00:52:03.014 "write": true, 00:52:03.014 "unmap": true, 00:52:03.014 "flush": true, 00:52:03.014 "reset": true, 00:52:03.014 "nvme_admin": true, 00:52:03.014 "nvme_io": true, 00:52:03.014 "nvme_io_md": false, 00:52:03.014 "write_zeroes": true, 00:52:03.014 "zcopy": false, 00:52:03.014 "get_zone_info": false, 00:52:03.014 "zone_management": false, 00:52:03.014 "zone_append": false, 00:52:03.014 "compare": true, 00:52:03.014 "compare_and_write": false, 00:52:03.014 "abort": true, 00:52:03.014 "seek_hole": false, 00:52:03.014 "seek_data": false, 00:52:03.014 "copy": true, 00:52:03.014 "nvme_iov_md": false 00:52:03.014 }, 00:52:03.014 "driver_specific": { 00:52:03.014 "nvme": [ 00:52:03.014 { 00:52:03.014 "pci_address": "0000:00:11.0", 00:52:03.014 "trid": { 00:52:03.014 "trtype": "PCIe", 00:52:03.014 "traddr": "0000:00:11.0" 00:52:03.014 }, 00:52:03.014 "ctrlr_data": { 00:52:03.014 "cntlid": 0, 00:52:03.014 "vendor_id": "0x1b36", 00:52:03.014 "model_number": "QEMU NVMe Ctrl", 00:52:03.014 "serial_number": "12341", 00:52:03.014 "firmware_revision": "8.0.0", 00:52:03.014 "subnqn": "nqn.2019-08.org.qemu:12341", 00:52:03.014 "oacs": { 00:52:03.014 "security": 0, 00:52:03.014 "format": 1, 00:52:03.014 "firmware": 0, 00:52:03.014 "ns_manage": 1 00:52:03.014 }, 00:52:03.014 "multi_ctrlr": false, 00:52:03.014 "ana_reporting": false 00:52:03.014 }, 00:52:03.014 "vs": { 00:52:03.014 "nvme_version": "1.4" 00:52:03.014 }, 00:52:03.014 "ns_data": { 00:52:03.014 "id": 1, 00:52:03.014 "can_share": false 00:52:03.014 } 00:52:03.014 } 00:52:03.014 ], 00:52:03.014 "mp_policy": "active_passive" 00:52:03.014 } 00:52:03.014 } 00:52:03.014 ]' 00:52:03.014 13:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:52:03.272 13:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:52:03.272 13:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:52:03.272 13:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:52:03.272 13:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:52:03.272 13:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:52:03.272 13:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:52:03.272 13:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:52:03.272 13:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:52:03.272 13:46:56 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:52:03.272 13:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:52:03.529 13:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=3bd5adc2-1a71-456f-a1a7-e9515671ce6a 00:52:03.529 13:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:52:03.529 13:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3bd5adc2-1a71-456f-a1a7-e9515671ce6a 00:52:03.788 13:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:52:04.047 13:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=ccf820a2-3a25-40c9-8280-a3d7d754b9dc 00:52:04.047 13:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u ccf820a2-3a25-40c9-8280-a3d7d754b9dc 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=f26ee883-1ec8-4049-9100-6903a6d791c2 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z f26ee883-1ec8-4049-9100-6903a6d791c2 ]] 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 f26ee883-1ec8-4049-9100-6903a6d791c2 5120 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=f26ee883-1ec8-4049-9100-6903a6d791c2 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size f26ee883-1ec8-4049-9100-6903a6d791c2 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f26ee883-1ec8-4049-9100-6903a6d791c2 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:52:04.047 13:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f26ee883-1ec8-4049-9100-6903a6d791c2 00:52:04.306 13:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:52:04.306 { 00:52:04.306 "name": "f26ee883-1ec8-4049-9100-6903a6d791c2", 00:52:04.306 "aliases": [ 00:52:04.306 "lvs/basen1p0" 00:52:04.306 ], 00:52:04.306 "product_name": "Logical Volume", 00:52:04.306 "block_size": 4096, 00:52:04.306 "num_blocks": 5242880, 00:52:04.306 "uuid": "f26ee883-1ec8-4049-9100-6903a6d791c2", 00:52:04.306 "assigned_rate_limits": { 00:52:04.306 "rw_ios_per_sec": 0, 00:52:04.306 "rw_mbytes_per_sec": 0, 00:52:04.306 "r_mbytes_per_sec": 0, 00:52:04.306 "w_mbytes_per_sec": 0 00:52:04.306 }, 00:52:04.306 "claimed": false, 00:52:04.306 "zoned": false, 00:52:04.306 "supported_io_types": { 00:52:04.306 "read": true, 00:52:04.306 "write": true, 00:52:04.306 "unmap": true, 00:52:04.306 "flush": false, 00:52:04.306 "reset": true, 00:52:04.306 "nvme_admin": false, 00:52:04.306 "nvme_io": false, 00:52:04.306 "nvme_io_md": false, 00:52:04.306 "write_zeroes": 
true, 00:52:04.306 "zcopy": false, 00:52:04.306 "get_zone_info": false, 00:52:04.306 "zone_management": false, 00:52:04.306 "zone_append": false, 00:52:04.306 "compare": false, 00:52:04.306 "compare_and_write": false, 00:52:04.306 "abort": false, 00:52:04.306 "seek_hole": true, 00:52:04.306 "seek_data": true, 00:52:04.306 "copy": false, 00:52:04.306 "nvme_iov_md": false 00:52:04.306 }, 00:52:04.306 "driver_specific": { 00:52:04.306 "lvol": { 00:52:04.306 "lvol_store_uuid": "ccf820a2-3a25-40c9-8280-a3d7d754b9dc", 00:52:04.306 "base_bdev": "basen1", 00:52:04.306 "thin_provision": true, 00:52:04.306 "num_allocated_clusters": 0, 00:52:04.306 "snapshot": false, 00:52:04.306 "clone": false, 00:52:04.306 "esnap_clone": false 00:52:04.306 } 00:52:04.306 } 00:52:04.306 } 00:52:04.306 ]' 00:52:04.306 13:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:52:04.306 13:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:52:04.306 13:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:52:04.306 13:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:52:04.306 13:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:52:04.306 13:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:52:04.306 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:52:04.306 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:52:04.306 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:52:04.874 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:52:04.874 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:52:04.874 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:52:04.874 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:52:04.874 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:52:04.874 13:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f26ee883-1ec8-4049-9100-6903a6d791c2 -c cachen1p0 --l2p_dram_limit 2 00:52:05.134 [2024-12-06 13:46:58.111365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:05.134 [2024-12-06 13:46:58.111435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:52:05.134 [2024-12-06 13:46:58.111482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:52:05.134 [2024-12-06 13:46:58.111494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:05.134 [2024-12-06 13:46:58.111613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:05.134 [2024-12-06 13:46:58.111626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:52:05.134 [2024-12-06 13:46:58.111641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:52:05.134 [2024-12-06 13:46:58.111653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:05.134 [2024-12-06 13:46:58.111680] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:52:05.134 [2024-12-06 
13:46:58.112851] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:52:05.134 [2024-12-06 13:46:58.112888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:05.134 [2024-12-06 13:46:58.112899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:52:05.134 [2024-12-06 13:46:58.112915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.209 ms 00:52:05.134 [2024-12-06 13:46:58.112926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:05.134 [2024-12-06 13:46:58.113015] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID bde2c8f6-55e5-4ae9-a512-4041382f02c9 00:52:05.134 [2024-12-06 13:46:58.115612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:05.134 [2024-12-06 13:46:58.115655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:52:05.134 [2024-12-06 13:46:58.115669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:52:05.134 [2024-12-06 13:46:58.115683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:05.134 [2024-12-06 13:46:58.130499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:05.134 [2024-12-06 13:46:58.130561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:52:05.134 [2024-12-06 13:46:58.130576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.748 ms 00:52:05.134 [2024-12-06 13:46:58.130591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:05.134 [2024-12-06 13:46:58.130645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:05.134 [2024-12-06 13:46:58.130663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:52:05.134 [2024-12-06 13:46:58.130675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:52:05.134 [2024-12-06 13:46:58.130695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:05.134 [2024-12-06 13:46:58.130778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:05.134 [2024-12-06 13:46:58.130797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:52:05.134 [2024-12-06 13:46:58.130812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:52:05.134 [2024-12-06 13:46:58.130825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:05.134 [2024-12-06 13:46:58.130855] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:52:05.134 [2024-12-06 13:46:58.137544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:05.134 [2024-12-06 13:46:58.137578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:52:05.134 [2024-12-06 13:46:58.137596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.695 ms 00:52:05.134 [2024-12-06 13:46:58.137623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:05.134 [2024-12-06 13:46:58.137693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:05.134 [2024-12-06 13:46:58.137705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:52:05.134 [2024-12-06 13:46:58.137720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:52:05.134 [2024-12-06 13:46:58.137732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:52:05.134 [2024-12-06 13:46:58.137772] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:52:05.134 [2024-12-06 13:46:58.137924] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:52:05.134 [2024-12-06 13:46:58.137947] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:52:05.134 [2024-12-06 13:46:58.137962] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:52:05.134 [2024-12-06 13:46:58.137979] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:52:05.134 [2024-12-06 13:46:58.137992] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:52:05.134 [2024-12-06 13:46:58.138010] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:52:05.134 [2024-12-06 13:46:58.138021] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:52:05.134 [2024-12-06 13:46:58.138039] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:52:05.134 [2024-12-06 13:46:58.138050] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:52:05.134 [2024-12-06 13:46:58.138064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:05.134 [2024-12-06 13:46:58.138075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:52:05.134 [2024-12-06 13:46:58.138090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.294 ms 00:52:05.134 [2024-12-06 13:46:58.138100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:05.134 [2024-12-06 13:46:58.138184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:05.134 [2024-12-06 13:46:58.138208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:52:05.134 [2024-12-06 13:46:58.138223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:52:05.134 [2024-12-06 13:46:58.138234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:05.134 [2024-12-06 13:46:58.138336] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:52:05.134 [2024-12-06 13:46:58.138349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:52:05.134 [2024-12-06 13:46:58.138363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:52:05.134 [2024-12-06 13:46:58.138374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:05.134 [2024-12-06 13:46:58.138388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:52:05.134 [2024-12-06 13:46:58.138398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:52:05.134 [2024-12-06 13:46:58.138422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:52:05.134 [2024-12-06 13:46:58.138432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:52:05.134 [2024-12-06 13:46:58.138447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:52:05.134 [2024-12-06 13:46:58.138456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:05.134 [2024-12-06 13:46:58.138470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:52:05.134 [2024-12-06 13:46:58.138479] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:52:05.134 [2024-12-06 13:46:58.138493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:05.134 [2024-12-06 13:46:58.138502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:52:05.134 [2024-12-06 13:46:58.138516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:52:05.134 [2024-12-06 13:46:58.138526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:05.134 [2024-12-06 13:46:58.138542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:52:05.134 [2024-12-06 13:46:58.138552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:52:05.134 [2024-12-06 13:46:58.138564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:05.134 [2024-12-06 13:46:58.138574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:52:05.134 [2024-12-06 13:46:58.138587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:52:05.134 [2024-12-06 13:46:58.138596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:05.134 [2024-12-06 13:46:58.138608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:52:05.135 [2024-12-06 13:46:58.138618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:52:05.135 [2024-12-06 13:46:58.138631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:05.135 [2024-12-06 13:46:58.138641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:52:05.135 [2024-12-06 13:46:58.138654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:52:05.135 [2024-12-06 13:46:58.138663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:05.135 [2024-12-06 13:46:58.138676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:52:05.135 [2024-12-06 13:46:58.138685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:52:05.135 [2024-12-06 13:46:58.138697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:05.135 [2024-12-06 13:46:58.138707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:52:05.135 [2024-12-06 13:46:58.138722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:52:05.135 [2024-12-06 13:46:58.138732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:05.135 [2024-12-06 13:46:58.138746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:52:05.135 [2024-12-06 13:46:58.138755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:52:05.135 [2024-12-06 13:46:58.138767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:05.135 [2024-12-06 13:46:58.138777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:52:05.135 [2024-12-06 13:46:58.138789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:52:05.135 [2024-12-06 13:46:58.138798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:05.135 [2024-12-06 13:46:58.138811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:52:05.135 [2024-12-06 13:46:58.138820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:52:05.135 [2024-12-06 13:46:58.138832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:05.135 [2024-12-06 13:46:58.138842] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:52:05.135 [2024-12-06 13:46:58.138856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:52:05.135 [2024-12-06 13:46:58.138866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:52:05.135 [2024-12-06 13:46:58.138882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:05.135 [2024-12-06 13:46:58.138892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:52:05.135 [2024-12-06 13:46:58.138910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:52:05.135 [2024-12-06 13:46:58.138919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:52:05.135 [2024-12-06 13:46:58.138933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:52:05.135 [2024-12-06 13:46:58.138943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:52:05.135 [2024-12-06 13:46:58.138956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:52:05.135 [2024-12-06 13:46:58.138968] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:52:05.135 [2024-12-06 13:46:58.138988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:05.135 [2024-12-06 13:46:58.139000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:52:05.135 [2024-12-06 13:46:58.139015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:52:05.135 [2024-12-06 13:46:58.139025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:52:05.135 [2024-12-06 13:46:58.139039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:52:05.135 [2024-12-06 13:46:58.139050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:52:05.135 [2024-12-06 13:46:58.139066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:52:05.135 [2024-12-06 13:46:58.139076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:52:05.135 [2024-12-06 13:46:58.139089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:52:05.135 [2024-12-06 13:46:58.139100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:52:05.135 [2024-12-06 13:46:58.139117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:52:05.135 [2024-12-06 13:46:58.139127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:52:05.135 [2024-12-06 13:46:58.139141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:52:05.135 [2024-12-06 13:46:58.139152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:52:05.135 [2024-12-06 13:46:58.139166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:52:05.135 [2024-12-06 13:46:58.139176] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:52:05.135 [2024-12-06 13:46:58.139191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:05.135 [2024-12-06 13:46:58.139203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:52:05.135 [2024-12-06 13:46:58.139216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:52:05.135 [2024-12-06 13:46:58.139226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:52:05.135 [2024-12-06 13:46:58.139240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:52:05.135 [2024-12-06 13:46:58.139251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:05.135 [2024-12-06 13:46:58.139265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:52:05.135 [2024-12-06 13:46:58.139275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.977 ms 00:52:05.135 [2024-12-06 13:46:58.139294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:05.135 [2024-12-06 13:46:58.139342] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
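[Editor's note] The shell trace above condenses to a short RPC sequence that assembles the FTL bdev: a thin-provisioned logical volume on the base namespace plus a split of a second NVMe controller serving as the non-volatile write cache. The helper get_bdev_size is simply block_size x num_blocks expressed in MiB, so basen1 is 4096 B x 1310720 blocks = 5120 MiB while the thin volume advertises 4096 B x 5242880 blocks = 20480 MiB on top of it, which is why the layout dump above reports a 20480.00 MiB base device over a 5120.00 MiB NV cache. A sketch of the sequence exactly as traced (UUIDs are specific to this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'            # find stale lvstores
    $rpc bdev_lvol_delete_lvstore -u 3bd5adc2-1a71-456f-a1a7-e9515671ce6a
    $rpc bdev_lvol_create_lvstore basen1 lvs                     # lvstore on the 5 GiB base ns
    $rpc bdev_lvol_create basen1p0 20480 -t \
        -u ccf820a2-3a25-40c9-8280-a3d7d754b9dc                  # 20 GiB thin (-t) volume
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create cachen1 -s 5120 1                     # carve one 5 GiB cache slice
    $rpc -t 60 bdev_ftl_create -b ftl \
        -d f26ee883-1ec8-4049-9100-6903a6d791c2 -c cachen1p0 --l2p_dram_limit 2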
00:52:05.135 [2024-12-06 13:46:58.139363] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:52:10.406 [2024-12-06 13:47:02.914999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.406 [2024-12-06 13:47:02.915074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:52:10.406 [2024-12-06 13:47:02.915092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4775.627 ms 00:52:10.406 [2024-12-06 13:47:02.915107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.406 [2024-12-06 13:47:02.962619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.406 [2024-12-06 13:47:02.962685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:52:10.406 [2024-12-06 13:47:02.962703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.976 ms 00:52:10.406 [2024-12-06 13:47:02.962718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.406 [2024-12-06 13:47:02.962827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.406 [2024-12-06 13:47:02.962843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:52:10.406 [2024-12-06 13:47:02.962855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:52:10.406 [2024-12-06 13:47:02.962878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.406 [2024-12-06 13:47:03.016036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.406 [2024-12-06 13:47:03.016106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:52:10.406 [2024-12-06 13:47:03.016121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.089 ms 00:52:10.406 [2024-12-06 13:47:03.016136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.406 [2024-12-06 13:47:03.016178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.406 [2024-12-06 13:47:03.016199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:52:10.406 [2024-12-06 13:47:03.016211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:52:10.406 [2024-12-06 13:47:03.016225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.406 [2024-12-06 13:47:03.017079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.406 [2024-12-06 13:47:03.017099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:52:10.406 [2024-12-06 13:47:03.017123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.788 ms 00:52:10.406 [2024-12-06 13:47:03.017138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.406 [2024-12-06 13:47:03.017182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.017197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:52:10.407 [2024-12-06 13:47:03.017212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:52:10.407 [2024-12-06 13:47:03.017229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.042879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.042930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:52:10.407 [2024-12-06 13:47:03.042944] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.628 ms 00:52:10.407 [2024-12-06 13:47:03.042958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.070372] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:52:10.407 [2024-12-06 13:47:03.072221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.072252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:52:10.407 [2024-12-06 13:47:03.072273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.160 ms 00:52:10.407 [2024-12-06 13:47:03.072286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.117561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.117608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:52:10.407 [2024-12-06 13:47:03.117629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.233 ms 00:52:10.407 [2024-12-06 13:47:03.117640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.117744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.117762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:52:10.407 [2024-12-06 13:47:03.117781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:52:10.407 [2024-12-06 13:47:03.117791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.153831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.153868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:52:10.407 [2024-12-06 13:47:03.153885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.984 ms 00:52:10.407 [2024-12-06 13:47:03.153897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.189498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.189532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:52:10.407 [2024-12-06 13:47:03.189549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.550 ms 00:52:10.407 [2024-12-06 13:47:03.189559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.190325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.190341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:52:10.407 [2024-12-06 13:47:03.190356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.725 ms 00:52:10.407 [2024-12-06 13:47:03.190370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.319994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.320053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:52:10.407 [2024-12-06 13:47:03.320080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 129.532 ms 00:52:10.407 [2024-12-06 13:47:03.320092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.359575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:52:10.407 [2024-12-06 13:47:03.359653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:52:10.407 [2024-12-06 13:47:03.359674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.403 ms 00:52:10.407 [2024-12-06 13:47:03.359686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.397360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.397428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:52:10.407 [2024-12-06 13:47:03.397452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.636 ms 00:52:10.407 [2024-12-06 13:47:03.397462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.433814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.433854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:52:10.407 [2024-12-06 13:47:03.433873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.317 ms 00:52:10.407 [2024-12-06 13:47:03.433884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.433919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.433932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:52:10.407 [2024-12-06 13:47:03.433952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:52:10.407 [2024-12-06 13:47:03.433962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.434081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:10.407 [2024-12-06 13:47:03.434100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:52:10.407 [2024-12-06 13:47:03.434133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:52:10.407 [2024-12-06 13:47:03.434144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:10.407 [2024-12-06 13:47:03.435607] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5323.682 ms, result 0 00:52:10.407 { 00:52:10.407 "name": "ftl", 00:52:10.407 "uuid": "bde2c8f6-55e5-4ae9-a512-4041382f02c9" 00:52:10.407 } 00:52:10.407 13:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:52:10.666 [2024-12-06 13:47:03.714631] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:10.666 13:47:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:52:10.925 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:52:11.185 [2024-12-06 13:47:04.170843] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:52:11.185 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:52:11.444 [2024-12-06 13:47:04.358119] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:52:11.444 13:47:04 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:52:11.704 Fill FTL, iteration 1 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84210 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84210 /var/tmp/spdk.tgt.sock 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84210 ']' 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:52:11.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:52:11.704 13:47:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:52:11.963 [2024-12-06 13:47:04.818096] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
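[Editor's note] Before the fill begins, the target exports the new FTL bdev over NVMe/TCP so that a separate SPDK process can drive I/O against it; the ftl/common.sh@121-126 trace above amounts to:

    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1    # any host, max 1 ns
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl        # expose the ftl bdev
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    $rpc save_config

The upgrade_shutdown.sh parameters that follow describe two 1 GiB passes: 1024 blocks of 1048576 bytes at queue depth 2, with the seek/skip offsets advancing by one full pass per iteration.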
00:52:11.963 [2024-12-06 13:47:04.818252] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84210 ] 00:52:11.963 [2024-12-06 13:47:04.996230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:12.223 [2024-12-06 13:47:05.136611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:13.162 13:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:13.162 13:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:52:13.162 13:47:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:52:13.421 ftln1 00:52:13.421 13:47:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:52:13.421 13:47:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:52:13.680 13:47:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:52:13.680 13:47:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84210 00:52:13.680 13:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84210 ']' 00:52:13.680 13:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84210 00:52:13.680 13:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:52:13.680 13:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:13.680 13:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84210 00:52:13.680 killing process with pid 84210 00:52:13.680 13:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:13.680 13:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:13.680 13:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84210' 00:52:13.680 13:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84210 00:52:13.680 13:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84210 00:52:16.968 13:47:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:52:16.968 13:47:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:52:16.968 [2024-12-06 13:47:09.439809] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
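[Editor's note] tcp_dd is a helper from test/ftl/common.sh. On first use, as traced above, it boots a throwaway spdk_tgt on core 1 to act as the NVMe/TCP initiator, attaches to the exported subsystem (which materializes the ftln1 bdev), snapshots the bdev subsystem configuration into ini.json, kills the helper target, and then hands the saved config plus the dd arguments to spdk_dd; subsequent calls find ini.json already present and skip straight to spdk_dd. A sketch under those assumptions, with paths as in this run:

    spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    rpc_ini="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    $rpc_ini bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0            # prints the new bdev name: ftln1
    {
      echo '{"subsystems": ['
      $rpc_ini save_subsystem_config -n bdev
      echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    killprocess $spdk_ini_pid                            # the saved config outlives the helper
    spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"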
00:52:16.968 [2024-12-06 13:47:09.441078] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84270 ] 00:52:16.968 [2024-12-06 13:47:09.647398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:16.968 [2024-12-06 13:47:09.790345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:18.347  [2024-12-06T13:47:12.383Z] Copying: 243/1024 [MB] (243 MBps) [2024-12-06T13:47:13.346Z] Copying: 487/1024 [MB] (244 MBps) [2024-12-06T13:47:14.723Z] Copying: 730/1024 [MB] (243 MBps) [2024-12-06T13:47:14.723Z] Copying: 973/1024 [MB] (243 MBps) [2024-12-06T13:47:16.100Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:52:23.000 00:52:23.000 Calculate MD5 checksum, iteration 1 00:52:23.000 13:47:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:52:23.000 13:47:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:52:23.000 13:47:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:52:23.000 13:47:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:23.000 13:47:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:23.000 13:47:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:23.000 13:47:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:52:23.000 13:47:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:52:23.000 [2024-12-06 13:47:15.947165] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
00:52:23.000 [2024-12-06 13:47:15.948139] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84334 ] 00:52:23.259 [2024-12-06 13:47:16.147487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:23.259 [2024-12-06 13:47:16.293598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:25.163  [2024-12-06T13:47:18.520Z] Copying: 659/1024 [MB] (659 MBps) [2024-12-06T13:47:19.452Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:52:26.352 00:52:26.611 13:47:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:52:26.611 13:47:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:28.511 13:47:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:52:28.511 13:47:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c1afbe6ac8ea8df79649a1d2e04d37f7 00:52:28.511 13:47:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:52:28.511 13:47:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:52:28.511 13:47:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:52:28.511 Fill FTL, iteration 2 00:52:28.511 13:47:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:52:28.511 13:47:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:28.511 13:47:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:28.511 13:47:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:28.511 13:47:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:52:28.511 13:47:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:52:28.511 [2024-12-06 13:47:21.451451] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
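[Editor's note] One full pass is now visible: 1 GiB of urandom written to ftln1 at roughly 243 MBps, read back over TCP at roughly 642 MBps, and checksummed (sums[0]=c1afbe6ac8ea8df79649a1d2e04d37f7). The upgrade_shutdown.sh@38-48 trace corresponds to a loop of roughly this shape, with the values used in this run ($testdir standing in for /home/vagrant/spdk_repo/spdk/test/ftl):

    bs=1048576 count=1024 qd=2 iterations=2 seek=0 skip=0
    for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of=$testdir/file --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum $testdir/file | cut -f1 '-d ')
    done

The per-iteration checksums are retained, presumably so the same regions can be re-verified after the prep_upgrade_on_shutdown restart this test exercises.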
00:52:28.511 [2024-12-06 13:47:21.451880] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84409 ] 00:52:28.770 [2024-12-06 13:47:21.637708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:28.770 [2024-12-06 13:47:21.783607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:30.675  [2024-12-06T13:47:24.345Z] Copying: 236/1024 [MB] (236 MBps) [2024-12-06T13:47:25.724Z] Copying: 460/1024 [MB] (224 MBps) [2024-12-06T13:47:26.659Z] Copying: 694/1024 [MB] (234 MBps) [2024-12-06T13:47:26.916Z] Copying: 928/1024 [MB] (234 MBps) [2024-12-06T13:47:28.292Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:52:35.192 00:52:35.192 Calculate MD5 checksum, iteration 2 00:52:35.192 13:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:52:35.192 13:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:52:35.192 13:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:52:35.192 13:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:35.192 13:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:35.192 13:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:35.192 13:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:52:35.192 13:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:52:35.192 [2024-12-06 13:47:28.153308] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
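[Editor's note] After the second pass's checksum lands below (sums[1]=c225590b15e4b8ba2e03e0861a8963e7), the script turns to FTL properties. As traced in upgrade_shutdown.sh@52-71, it enables verbose_mode, dumps the properties (prep_upgrade_on_shutdown still false), arms prep_upgrade_on_shutdown, then counts the NV-cache chunks the fill dirtied and requires a non-zero result; condensed:

    $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
    $rpc bdev_ftl_get_properties -b ftl                  # prep_upgrade_on_shutdown: false
    $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    used=$($rpc bdev_ftl_get_properties -b ftl | jq '[.properties[]
          | select(.name == "cache_device") | .chunks[]
          | select(.utilization != 0.0)] | length')      # 3 in this run
    [[ $used -eq 0 ]] && exit 1                          # sketch; the fill must have dirtied the cache
    $rpc bdev_ftl_get_properties -b ftl                  # now shows the prep flag set to true

With the flag armed, killing the target (pid 84067) triggers the prep-upgrade path seen at the end of this section: the core poller stops and FTL persists the L2P, NV cache metadata, valid map, P2L, band and trim metadata, and finally the superblock.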
00:52:35.192 [2024-12-06 13:47:28.153719] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84483 ] 00:52:35.452 [2024-12-06 13:47:28.351293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:35.452 [2024-12-06 13:47:28.494628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:37.364  [2024-12-06T13:47:31.076Z] Copying: 666/1024 [MB] (666 MBps) [2024-12-06T13:47:32.467Z] Copying: 1024/1024 [MB] (average 658 MBps) 00:52:39.367 00:52:39.367 13:47:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:52:39.367 13:47:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:41.275 13:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:52:41.275 13:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c225590b15e4b8ba2e03e0861a8963e7 00:52:41.275 13:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:52:41.275 13:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:52:41.275 13:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:52:41.535 [2024-12-06 13:47:34.452403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:41.535 [2024-12-06 13:47:34.452477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:52:41.535 [2024-12-06 13:47:34.452513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:52:41.535 [2024-12-06 13:47:34.452524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:41.535 [2024-12-06 13:47:34.452555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:41.535 [2024-12-06 13:47:34.452582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:52:41.535 [2024-12-06 13:47:34.452594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:52:41.535 [2024-12-06 13:47:34.452606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:41.535 [2024-12-06 13:47:34.452627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:41.535 [2024-12-06 13:47:34.452638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:52:41.535 [2024-12-06 13:47:34.452649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:52:41.535 [2024-12-06 13:47:34.452659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:41.535 [2024-12-06 13:47:34.452733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.317 ms, result 0 00:52:41.535 true 00:52:41.535 13:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:52:41.795 { 00:52:41.795 "name": "ftl", 00:52:41.795 "properties": [ 00:52:41.795 { 00:52:41.795 "name": "superblock_version", 00:52:41.795 "value": 5, 00:52:41.795 "read-only": true 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "name": "base_device", 00:52:41.795 "bands": [ 00:52:41.795 { 00:52:41.795 "id": 0, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 
00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 1, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 2, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 3, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 4, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 5, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 6, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 7, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 8, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 9, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 10, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 11, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 12, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 13, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 14, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 15, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 16, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 }, 00:52:41.795 { 00:52:41.795 "id": 17, 00:52:41.795 "state": "FREE", 00:52:41.795 "validity": 0.0 00:52:41.795 } 00:52:41.795 ], 00:52:41.795 "read-only": true 00:52:41.795 }, 00:52:41.795 { 00:52:41.796 "name": "cache_device", 00:52:41.796 "type": "bdev", 00:52:41.796 "chunks": [ 00:52:41.796 { 00:52:41.796 "id": 0, 00:52:41.796 "state": "INACTIVE", 00:52:41.796 "utilization": 0.0 00:52:41.796 }, 00:52:41.796 { 00:52:41.796 "id": 1, 00:52:41.796 "state": "CLOSED", 00:52:41.796 "utilization": 1.0 00:52:41.796 }, 00:52:41.796 { 00:52:41.796 "id": 2, 00:52:41.796 "state": "CLOSED", 00:52:41.796 "utilization": 1.0 00:52:41.796 }, 00:52:41.796 { 00:52:41.796 "id": 3, 00:52:41.796 "state": "OPEN", 00:52:41.796 "utilization": 0.001953125 00:52:41.796 }, 00:52:41.796 { 00:52:41.796 "id": 4, 00:52:41.796 "state": "OPEN", 00:52:41.796 "utilization": 0.0 00:52:41.796 } 00:52:41.796 ], 00:52:41.796 "read-only": true 00:52:41.796 }, 00:52:41.796 { 00:52:41.796 "name": "verbose_mode", 00:52:41.796 "value": true, 00:52:41.796 "unit": "", 00:52:41.796 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:52:41.796 }, 00:52:41.796 { 00:52:41.796 "name": "prep_upgrade_on_shutdown", 00:52:41.796 "value": false, 00:52:41.796 "unit": "", 00:52:41.796 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:52:41.796 } 00:52:41.796 ] 00:52:41.796 } 00:52:41.796 13:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:52:42.055 [2024-12-06 13:47:34.999186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:52:42.055 [2024-12-06 13:47:34.999250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:52:42.056 [2024-12-06 13:47:34.999268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:52:42.056 [2024-12-06 13:47:34.999279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.056 [2024-12-06 13:47:34.999307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.056 [2024-12-06 13:47:34.999318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:52:42.056 [2024-12-06 13:47:34.999329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:52:42.056 [2024-12-06 13:47:34.999340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.056 [2024-12-06 13:47:34.999360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.056 [2024-12-06 13:47:34.999371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:52:42.056 [2024-12-06 13:47:34.999382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:52:42.056 [2024-12-06 13:47:34.999392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.056 [2024-12-06 13:47:34.999483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.285 ms, result 0 00:52:42.056 true 00:52:42.056 13:47:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:52:42.056 13:47:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:52:42.056 13:47:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:52:42.314 13:47:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:52:42.314 13:47:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:52:42.314 13:47:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:52:42.571 [2024-12-06 13:47:35.491640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.571 [2024-12-06 13:47:35.491926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:52:42.571 [2024-12-06 13:47:35.491953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:52:42.571 [2024-12-06 13:47:35.491965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.571 [2024-12-06 13:47:35.492007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.571 [2024-12-06 13:47:35.492019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:52:42.571 [2024-12-06 13:47:35.492031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:52:42.571 [2024-12-06 13:47:35.492042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:42.571 [2024-12-06 13:47:35.492063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:42.571 [2024-12-06 13:47:35.492074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:52:42.571 [2024-12-06 13:47:35.492085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:52:42.571 [2024-12-06 13:47:35.492096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:52:42.571 [2024-12-06 13:47:35.492167] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.512 ms, result 0 00:52:42.571 true 00:52:42.571 13:47:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:52:42.830 { 00:52:42.830 "name": "ftl", 00:52:42.830 "properties": [ 00:52:42.830 { 00:52:42.830 "name": "superblock_version", 00:52:42.830 "value": 5, 00:52:42.830 "read-only": true 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "name": "base_device", 00:52:42.830 "bands": [ 00:52:42.830 { 00:52:42.830 "id": 0, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 1, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 2, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 3, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 4, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 5, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 6, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 7, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 8, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 9, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 10, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 11, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 12, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 13, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 14, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 15, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 16, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 17, 00:52:42.830 "state": "FREE", 00:52:42.830 "validity": 0.0 00:52:42.830 } 00:52:42.830 ], 00:52:42.830 "read-only": true 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "name": "cache_device", 00:52:42.830 "type": "bdev", 00:52:42.830 "chunks": [ 00:52:42.830 { 00:52:42.830 "id": 0, 00:52:42.830 "state": "INACTIVE", 00:52:42.830 "utilization": 0.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 1, 00:52:42.830 "state": "CLOSED", 00:52:42.830 "utilization": 1.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 2, 00:52:42.830 "state": "CLOSED", 00:52:42.830 "utilization": 1.0 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 3, 00:52:42.830 "state": "OPEN", 00:52:42.830 "utilization": 0.001953125 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "id": 4, 00:52:42.830 "state": "OPEN", 00:52:42.830 "utilization": 0.0 00:52:42.830 } 00:52:42.830 ], 00:52:42.830 "read-only": true 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "name": "verbose_mode", 
00:52:42.830 "value": true, 00:52:42.830 "unit": "", 00:52:42.830 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:52:42.830 }, 00:52:42.830 { 00:52:42.830 "name": "prep_upgrade_on_shutdown", 00:52:42.830 "value": true, 00:52:42.830 "unit": "", 00:52:42.830 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:52:42.830 } 00:52:42.830 ] 00:52:42.830 } 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84067 ]] 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84067 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84067 ']' 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84067 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84067 00:52:42.830 killing process with pid 84067 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84067' 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84067 00:52:42.830 13:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84067 00:52:44.208 [2024-12-06 13:47:37.028064] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:52:44.208 [2024-12-06 13:47:37.048004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:44.208 [2024-12-06 13:47:37.048049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:52:44.208 [2024-12-06 13:47:37.048067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:52:44.208 [2024-12-06 13:47:37.048078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:44.208 [2024-12-06 13:47:37.048105] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:52:44.208 [2024-12-06 13:47:37.052803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:44.208 [2024-12-06 13:47:37.052830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:52:44.208 [2024-12-06 13:47:37.052844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.681 ms 00:52:44.208 [2024-12-06 13:47:37.052876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.326 [2024-12-06 13:47:44.983183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:52.326 [2024-12-06 13:47:44.983256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:52:52.326 [2024-12-06 13:47:44.983275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7930.234 ms 00:52:52.326 [2024-12-06 13:47:44.983292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.326 [2024-12-06 13:47:44.984452] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:52:52.326 [2024-12-06 13:47:44.984479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:52:52.326 [2024-12-06 13:47:44.984492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.142 ms 00:52:52.326 [2024-12-06 13:47:44.984504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.326 [2024-12-06 13:47:44.985452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:52.326 [2024-12-06 13:47:44.985474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:52:52.326 [2024-12-06 13:47:44.985486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.917 ms 00:52:52.326 [2024-12-06 13:47:44.985497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.326 [2024-12-06 13:47:45.001316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:52.326 [2024-12-06 13:47:45.001350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:52:52.326 [2024-12-06 13:47:45.001364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.774 ms 00:52:52.326 [2024-12-06 13:47:45.001376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.326 [2024-12-06 13:47:45.011193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:52.326 [2024-12-06 13:47:45.011228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:52:52.326 [2024-12-06 13:47:45.011242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.769 ms 00:52:52.326 [2024-12-06 13:47:45.011253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.326 [2024-12-06 13:47:45.011333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:52.326 [2024-12-06 13:47:45.011346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:52:52.326 [2024-12-06 13:47:45.011365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:52:52.326 [2024-12-06 13:47:45.011375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.326 [2024-12-06 13:47:45.026534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:52.326 [2024-12-06 13:47:45.026566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:52:52.326 [2024-12-06 13:47:45.026579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.141 ms 00:52:52.326 [2024-12-06 13:47:45.026589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.326 [2024-12-06 13:47:45.041840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:52.326 [2024-12-06 13:47:45.041868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:52:52.326 [2024-12-06 13:47:45.041880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.216 ms 00:52:52.326 [2024-12-06 13:47:45.041890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.326 [2024-12-06 13:47:45.056295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:52.326 [2024-12-06 13:47:45.056323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:52:52.326 [2024-12-06 13:47:45.056337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.371 ms 00:52:52.326 [2024-12-06 13:47:45.056346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.326 [2024-12-06 13:47:45.070789] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:52.326 [2024-12-06 13:47:45.070817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:52:52.326 [2024-12-06 13:47:45.070829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.357 ms 00:52:52.326 [2024-12-06 13:47:45.070838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.326 [2024-12-06 13:47:45.070872] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:52:52.326 [2024-12-06 13:47:45.070904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:52:52.326 [2024-12-06 13:47:45.070918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:52:52.326 [2024-12-06 13:47:45.070929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:52:52.326 [2024-12-06 13:47:45.070940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:52:52.326 [2024-12-06 13:47:45.070952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:52:52.326 [2024-12-06 13:47:45.070963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:52:52.326 [2024-12-06 13:47:45.070974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:52:52.326 [2024-12-06 13:47:45.070985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:52:52.326 [2024-12-06 13:47:45.070995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:52:52.326 [2024-12-06 13:47:45.071006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:52:52.326 [2024-12-06 13:47:45.071017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:52:52.326 [2024-12-06 13:47:45.071027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:52:52.326 [2024-12-06 13:47:45.071037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:52:52.326 [2024-12-06 13:47:45.071048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:52:52.327 [2024-12-06 13:47:45.071059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:52:52.327 [2024-12-06 13:47:45.071070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:52:52.327 [2024-12-06 13:47:45.071081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:52:52.327 [2024-12-06 13:47:45.071092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:52:52.327 [2024-12-06 13:47:45.071105] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:52:52.327 [2024-12-06 13:47:45.071116] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: bde2c8f6-55e5-4ae9-a512-4041382f02c9 00:52:52.327 [2024-12-06 13:47:45.071128] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:52:52.327 [2024-12-06 13:47:45.071138] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:52:52.327 [2024-12-06 13:47:45.071148] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:52:52.327 [2024-12-06 13:47:45.071159] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:52:52.327 [2024-12-06 13:47:45.071177] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:52:52.327 [2024-12-06 13:47:45.071192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:52:52.327 [2024-12-06 13:47:45.071202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:52:52.327 [2024-12-06 13:47:45.071211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:52:52.327 [2024-12-06 13:47:45.071221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:52:52.327 [2024-12-06 13:47:45.071236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:52.327 [2024-12-06 13:47:45.071252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:52:52.327 [2024-12-06 13:47:45.071264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:52:52.327 [2024-12-06 13:47:45.071275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.092355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:52.327 [2024-12-06 13:47:45.092384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:52:52.327 [2024-12-06 13:47:45.092422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.050 ms 00:52:52.327 [2024-12-06 13:47:45.092439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.093067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:52.327 [2024-12-06 13:47:45.093085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:52:52.327 [2024-12-06 13:47:45.093096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.606 ms 00:52:52.327 [2024-12-06 13:47:45.093107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.161708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:52.327 [2024-12-06 13:47:45.161743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:52:52.327 [2024-12-06 13:47:45.161762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:52.327 [2024-12-06 13:47:45.161772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.161809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:52.327 [2024-12-06 13:47:45.161820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:52:52.327 [2024-12-06 13:47:45.161831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:52.327 [2024-12-06 13:47:45.161841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.161917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:52.327 [2024-12-06 13:47:45.161931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:52:52.327 [2024-12-06 13:47:45.161942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:52.327 [2024-12-06 13:47:45.161958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.161977] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:52.327 [2024-12-06 13:47:45.161988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:52:52.327 [2024-12-06 13:47:45.161999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:52.327 [2024-12-06 13:47:45.162009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.296245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:52.327 [2024-12-06 13:47:45.296310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:52:52.327 [2024-12-06 13:47:45.296326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:52.327 [2024-12-06 13:47:45.296345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.399689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:52.327 [2024-12-06 13:47:45.399771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:52:52.327 [2024-12-06 13:47:45.399788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:52.327 [2024-12-06 13:47:45.399800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.399942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:52.327 [2024-12-06 13:47:45.399956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:52:52.327 [2024-12-06 13:47:45.399967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:52.327 [2024-12-06 13:47:45.399978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.400040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:52.327 [2024-12-06 13:47:45.400053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:52:52.327 [2024-12-06 13:47:45.400064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:52.327 [2024-12-06 13:47:45.400074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.400233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:52.327 [2024-12-06 13:47:45.400247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:52:52.327 [2024-12-06 13:47:45.400259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:52.327 [2024-12-06 13:47:45.400269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.400310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:52.327 [2024-12-06 13:47:45.400330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:52:52.327 [2024-12-06 13:47:45.400341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:52.327 [2024-12-06 13:47:45.400366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.400428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:52.327 [2024-12-06 13:47:45.400446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:52:52.327 [2024-12-06 13:47:45.400457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:52.327 [2024-12-06 13:47:45.400468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 
[2024-12-06 13:47:45.400525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:52.327 [2024-12-06 13:47:45.400541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:52:52.327 [2024-12-06 13:47:45.400553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:52.327 [2024-12-06 13:47:45.400565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:52.327 [2024-12-06 13:47:45.400713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8352.632 ms, result 0 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84695 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84695 00:52:56.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84695 ']' 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:56.523 13:47:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:52:56.523 [2024-12-06 13:47:49.203078] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
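A note on the sequence above: before allowing the prep-upgrade shutdown, the test (upgrade_shutdown.sh@59-@64 in the xtrace) confirms that earlier writes actually landed in the NV cache by counting cache_device chunks with non-zero utilization; used=3 passes that gate. A minimal sketch of the check, reconstructed from the trace rather than quoted from the upgrade_shutdown.sh source (the failure branch is an assumption):

    # Count cache chunks that hold data; the jq filter is copied from the trace.
    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && exit 1  # assumed failure path: nothing was cached

With prep_upgrade_on_shutdown set to true, killing pid 84067 triggers the graceful 'FTL shutdown' traced above, which persists the L2P, NV cache metadata, valid map, P2L checkpoints, band and trim metadata, and the superblock before marking the device clean. The WAF of 1.5006 in the statistics dump is simply total writes divided by user writes: 786752 / 524288 ≈ 1.5006.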
00:52:56.523 [2024-12-06 13:47:49.203242] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84695 ] 00:52:56.523 [2024-12-06 13:47:49.380560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:56.523 [2024-12-06 13:47:49.518257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:57.906 [2024-12-06 13:47:50.630994] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:52:57.906 [2024-12-06 13:47:50.631077] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:52:57.906 [2024-12-06 13:47:50.779679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.906 [2024-12-06 13:47:50.779728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:52:57.906 [2024-12-06 13:47:50.779746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:52:57.906 [2024-12-06 13:47:50.779757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.906 [2024-12-06 13:47:50.779820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.906 [2024-12-06 13:47:50.779833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:52:57.906 [2024-12-06 13:47:50.779844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:52:57.906 [2024-12-06 13:47:50.779855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.906 [2024-12-06 13:47:50.779886] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:52:57.906 [2024-12-06 13:47:50.780891] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:52:57.906 [2024-12-06 13:47:50.780922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.906 [2024-12-06 13:47:50.780934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:52:57.906 [2024-12-06 13:47:50.780946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.047 ms 00:52:57.906 [2024-12-06 13:47:50.780956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.906 [2024-12-06 13:47:50.783471] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:52:57.906 [2024-12-06 13:47:50.803813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.906 [2024-12-06 13:47:50.803850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:52:57.906 [2024-12-06 13:47:50.803872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.342 ms 00:52:57.906 [2024-12-06 13:47:50.803884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.906 [2024-12-06 13:47:50.803956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.906 [2024-12-06 13:47:50.803969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:52:57.906 [2024-12-06 13:47:50.803981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:52:57.906 [2024-12-06 13:47:50.803992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.906 [2024-12-06 13:47:50.816810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.906 [2024-12-06 
13:47:50.816841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:52:57.906 [2024-12-06 13:47:50.816855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.736 ms 00:52:57.906 [2024-12-06 13:47:50.816865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.906 [2024-12-06 13:47:50.816959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.906 [2024-12-06 13:47:50.816973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:52:57.906 [2024-12-06 13:47:50.816986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:52:57.906 [2024-12-06 13:47:50.816996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.906 [2024-12-06 13:47:50.817064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.906 [2024-12-06 13:47:50.817082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:52:57.906 [2024-12-06 13:47:50.817093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:52:57.906 [2024-12-06 13:47:50.817104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.906 [2024-12-06 13:47:50.817134] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:52:57.906 [2024-12-06 13:47:50.823043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.906 [2024-12-06 13:47:50.823074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:52:57.906 [2024-12-06 13:47:50.823102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.917 ms 00:52:57.906 [2024-12-06 13:47:50.823117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.906 [2024-12-06 13:47:50.823151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.906 [2024-12-06 13:47:50.823163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:52:57.906 [2024-12-06 13:47:50.823174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:52:57.906 [2024-12-06 13:47:50.823186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.906 [2024-12-06 13:47:50.823227] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:52:57.906 [2024-12-06 13:47:50.823261] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:52:57.906 [2024-12-06 13:47:50.823299] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:52:57.906 [2024-12-06 13:47:50.823318] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:52:57.906 [2024-12-06 13:47:50.823451] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:52:57.906 [2024-12-06 13:47:50.823467] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:52:57.906 [2024-12-06 13:47:50.823481] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:52:57.906 [2024-12-06 13:47:50.823495] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:52:57.906 [2024-12-06 13:47:50.823508] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:52:57.906 [2024-12-06 13:47:50.823525] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:52:57.906 [2024-12-06 13:47:50.823535] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:52:57.906 [2024-12-06 13:47:50.823546] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:52:57.906 [2024-12-06 13:47:50.823557] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:52:57.906 [2024-12-06 13:47:50.823568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.906 [2024-12-06 13:47:50.823579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:52:57.906 [2024-12-06 13:47:50.823590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 00:52:57.906 [2024-12-06 13:47:50.823600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.906 [2024-12-06 13:47:50.823678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.906 [2024-12-06 13:47:50.823690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:52:57.906 [2024-12-06 13:47:50.823705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:52:57.906 [2024-12-06 13:47:50.823715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.906 [2024-12-06 13:47:50.823815] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:52:57.906 [2024-12-06 13:47:50.823836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:52:57.906 [2024-12-06 13:47:50.823847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:52:57.907 [2024-12-06 13:47:50.823858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:57.907 [2024-12-06 13:47:50.823869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:52:57.907 [2024-12-06 13:47:50.823879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:52:57.907 [2024-12-06 13:47:50.823889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:52:57.907 [2024-12-06 13:47:50.823899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:52:57.907 [2024-12-06 13:47:50.823909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:52:57.907 [2024-12-06 13:47:50.823919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:57.907 [2024-12-06 13:47:50.823933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:52:57.907 [2024-12-06 13:47:50.823944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:52:57.907 [2024-12-06 13:47:50.823953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:57.907 [2024-12-06 13:47:50.823963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:52:57.907 [2024-12-06 13:47:50.823973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:52:57.907 [2024-12-06 13:47:50.823982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:57.907 [2024-12-06 13:47:50.823992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:52:57.907 [2024-12-06 13:47:50.824002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:52:57.907 [2024-12-06 13:47:50.824011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:57.907 [2024-12-06 13:47:50.824021] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:52:57.907 [2024-12-06 13:47:50.824030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:52:57.907 [2024-12-06 13:47:50.824039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:57.907 [2024-12-06 13:47:50.824049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:52:57.907 [2024-12-06 13:47:50.824073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:52:57.907 [2024-12-06 13:47:50.824082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:57.907 [2024-12-06 13:47:50.824092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:52:57.907 [2024-12-06 13:47:50.824101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:52:57.907 [2024-12-06 13:47:50.824111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:57.907 [2024-12-06 13:47:50.824120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:52:57.907 [2024-12-06 13:47:50.824130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:52:57.907 [2024-12-06 13:47:50.824140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:52:57.907 [2024-12-06 13:47:50.824150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:52:57.907 [2024-12-06 13:47:50.824159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:52:57.907 [2024-12-06 13:47:50.824168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:57.907 [2024-12-06 13:47:50.824178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:52:57.907 [2024-12-06 13:47:50.824187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:52:57.907 [2024-12-06 13:47:50.824197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:57.907 [2024-12-06 13:47:50.824206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:52:57.907 [2024-12-06 13:47:50.824216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:52:57.907 [2024-12-06 13:47:50.824225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:57.907 [2024-12-06 13:47:50.824234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:52:57.907 [2024-12-06 13:47:50.824243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:52:57.907 [2024-12-06 13:47:50.824255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:57.907 [2024-12-06 13:47:50.824264] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:52:57.907 [2024-12-06 13:47:50.824275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:52:57.907 [2024-12-06 13:47:50.824286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:52:57.907 [2024-12-06 13:47:50.824296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:52:57.907 [2024-12-06 13:47:50.824311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:52:57.907 [2024-12-06 13:47:50.824321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:52:57.907 [2024-12-06 13:47:50.824332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:52:57.907 [2024-12-06 13:47:50.824342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:52:57.907 [2024-12-06 13:47:50.824351] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:52:57.907 [2024-12-06 13:47:50.824361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:52:57.907 [2024-12-06 13:47:50.824372] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:52:57.907 [2024-12-06 13:47:50.824385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:57.907 [2024-12-06 13:47:50.824414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:52:57.907 [2024-12-06 13:47:50.824426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:52:57.907 [2024-12-06 13:47:50.824437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:52:57.907 [2024-12-06 13:47:50.824448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:52:57.907 [2024-12-06 13:47:50.824460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:52:57.907 [2024-12-06 13:47:50.824475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:52:57.907 [2024-12-06 13:47:50.824486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:52:57.907 [2024-12-06 13:47:50.824497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:52:57.907 [2024-12-06 13:47:50.824508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:52:57.907 [2024-12-06 13:47:50.824518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:52:57.907 [2024-12-06 13:47:50.824529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:52:57.907 [2024-12-06 13:47:50.824539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:52:57.907 [2024-12-06 13:47:50.824549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:52:57.907 [2024-12-06 13:47:50.824560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:52:57.907 [2024-12-06 13:47:50.824570] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:52:57.907 [2024-12-06 13:47:50.824582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:57.907 [2024-12-06 13:47:50.824593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:52:57.907 [2024-12-06 13:47:50.824604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:52:57.907 [2024-12-06 13:47:50.824614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:52:57.907 [2024-12-06 13:47:50.824628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:52:57.907 [2024-12-06 13:47:50.824640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:57.907 [2024-12-06 13:47:50.824651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:52:57.907 [2024-12-06 13:47:50.824661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.882 ms 00:52:57.907 [2024-12-06 13:47:50.824671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:57.907 [2024-12-06 13:47:50.824725] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:52:57.907 [2024-12-06 13:47:50.824739] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:53:01.196 [2024-12-06 13:47:54.159288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.196 [2024-12-06 13:47:54.159371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:53:01.196 [2024-12-06 13:47:54.159409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3334.542 ms 00:53:01.196 [2024-12-06 13:47:54.159435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.196 [2024-12-06 13:47:54.207558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.196 [2024-12-06 13:47:54.207623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:53:01.196 [2024-12-06 13:47:54.207644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.698 ms 00:53:01.196 [2024-12-06 13:47:54.207656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.196 [2024-12-06 13:47:54.207790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.196 [2024-12-06 13:47:54.207813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:53:01.196 [2024-12-06 13:47:54.207825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:53:01.196 [2024-12-06 13:47:54.207836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.196 [2024-12-06 13:47:54.262783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.196 [2024-12-06 13:47:54.262840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:53:01.196 [2024-12-06 13:47:54.262878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.872 ms 00:53:01.196 [2024-12-06 13:47:54.262889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.196 [2024-12-06 13:47:54.262944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.196 [2024-12-06 13:47:54.262955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:53:01.197 [2024-12-06 13:47:54.262967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:53:01.197 [2024-12-06 13:47:54.262978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.197 [2024-12-06 13:47:54.263868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.197 [2024-12-06 13:47:54.263889] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:53:01.197 [2024-12-06 13:47:54.263902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.819 ms 00:53:01.197 [2024-12-06 13:47:54.263914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.197 [2024-12-06 13:47:54.263969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.197 [2024-12-06 13:47:54.263981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:53:01.197 [2024-12-06 13:47:54.263992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:53:01.197 [2024-12-06 13:47:54.264002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.197 [2024-12-06 13:47:54.289947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.197 [2024-12-06 13:47:54.290000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:53:01.197 [2024-12-06 13:47:54.290016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.918 ms 00:53:01.197 [2024-12-06 13:47:54.290027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.456 [2024-12-06 13:47:54.321458] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:53:01.456 [2024-12-06 13:47:54.321499] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:53:01.456 [2024-12-06 13:47:54.321516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.321529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:53:01.457 [2024-12-06 13:47:54.321542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.349 ms 00:53:01.457 [2024-12-06 13:47:54.321553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.342458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.342512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:53:01.457 [2024-12-06 13:47:54.342528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.856 ms 00:53:01.457 [2024-12-06 13:47:54.342540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.361941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.361981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:53:01.457 [2024-12-06 13:47:54.362012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.347 ms 00:53:01.457 [2024-12-06 13:47:54.362023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.380097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.380131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:53:01.457 [2024-12-06 13:47:54.380146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.028 ms 00:53:01.457 [2024-12-06 13:47:54.380156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.381036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.381068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:53:01.457 [2024-12-06 
13:47:54.381082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.764 ms 00:53:01.457 [2024-12-06 13:47:54.381093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.479546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.479624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:53:01.457 [2024-12-06 13:47:54.479645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 98.421 ms 00:53:01.457 [2024-12-06 13:47:54.479657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.491303] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:53:01.457 [2024-12-06 13:47:54.492894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.492934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:53:01.457 [2024-12-06 13:47:54.492949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.170 ms 00:53:01.457 [2024-12-06 13:47:54.492976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.493109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.493129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:53:01.457 [2024-12-06 13:47:54.493142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:53:01.457 [2024-12-06 13:47:54.493152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.493246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.493277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:53:01.457 [2024-12-06 13:47:54.493289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:53:01.457 [2024-12-06 13:47:54.493300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.493330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.493343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:53:01.457 [2024-12-06 13:47:54.493359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:53:01.457 [2024-12-06 13:47:54.493369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.493416] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:53:01.457 [2024-12-06 13:47:54.493429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.493453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:53:01.457 [2024-12-06 13:47:54.493465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:53:01.457 [2024-12-06 13:47:54.493476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.532027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.532078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:53:01.457 [2024-12-06 13:47:54.532096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.525 ms 00:53:01.457 [2024-12-06 13:47:54.532109] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.532200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.457 [2024-12-06 13:47:54.532215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:53:01.457 [2024-12-06 13:47:54.532228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:53:01.457 [2024-12-06 13:47:54.532240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.457 [2024-12-06 13:47:54.533847] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3753.565 ms, result 0 00:53:01.457 [2024-12-06 13:47:54.548478] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:01.717 [2024-12-06 13:47:54.564478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:53:01.717 [2024-12-06 13:47:54.574739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:53:01.717 13:47:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:01.717 13:47:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:53:01.717 13:47:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:53:01.717 13:47:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:53:01.717 13:47:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:53:01.976 [2024-12-06 13:47:54.858763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.976 [2024-12-06 13:47:54.858838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:53:01.976 [2024-12-06 13:47:54.858862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:53:01.976 [2024-12-06 13:47:54.858874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.976 [2024-12-06 13:47:54.858903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.976 [2024-12-06 13:47:54.858916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:53:01.976 [2024-12-06 13:47:54.858928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:53:01.976 [2024-12-06 13:47:54.858939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.977 [2024-12-06 13:47:54.858960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:01.977 [2024-12-06 13:47:54.858972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:53:01.977 [2024-12-06 13:47:54.858983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:53:01.977 [2024-12-06 13:47:54.858995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:01.977 [2024-12-06 13:47:54.859068] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.296 ms, result 0 00:53:01.977 true 00:53:01.977 13:47:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:53:01.977 { 00:53:01.977 "name": "ftl", 00:53:01.977 "properties": [ 00:53:01.977 { 00:53:01.977 "name": "superblock_version", 00:53:01.977 "value": 5, 00:53:01.977 "read-only": true 00:53:01.977 }, 
00:53:01.977 { 00:53:01.977 "name": "base_device", 00:53:01.977 "bands": [ 00:53:01.977 { 00:53:01.977 "id": 0, 00:53:01.977 "state": "CLOSED", 00:53:01.977 "validity": 1.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 1, 00:53:01.977 "state": "CLOSED", 00:53:01.977 "validity": 1.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 2, 00:53:01.977 "state": "CLOSED", 00:53:01.977 "validity": 0.007843137254901933 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 3, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 4, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 5, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 6, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 7, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 8, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 9, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 10, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 11, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 12, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 13, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 14, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 15, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 16, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 17, 00:53:01.977 "state": "FREE", 00:53:01.977 "validity": 0.0 00:53:01.977 } 00:53:01.977 ], 00:53:01.977 "read-only": true 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "name": "cache_device", 00:53:01.977 "type": "bdev", 00:53:01.977 "chunks": [ 00:53:01.977 { 00:53:01.977 "id": 0, 00:53:01.977 "state": "INACTIVE", 00:53:01.977 "utilization": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 1, 00:53:01.977 "state": "OPEN", 00:53:01.977 "utilization": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 2, 00:53:01.977 "state": "OPEN", 00:53:01.977 "utilization": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 3, 00:53:01.977 "state": "FREE", 00:53:01.977 "utilization": 0.0 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "id": 4, 00:53:01.977 "state": "FREE", 00:53:01.977 "utilization": 0.0 00:53:01.977 } 00:53:01.977 ], 00:53:01.977 "read-only": true 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "name": "verbose_mode", 00:53:01.977 "value": true, 00:53:01.977 "unit": "", 00:53:01.977 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:53:01.977 }, 00:53:01.977 { 00:53:01.977 "name": "prep_upgrade_on_shutdown", 00:53:01.977 "value": false, 00:53:01.977 "unit": "", 00:53:01.977 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:53:01.977 } 00:53:01.977 ] 00:53:01.977 } 00:53:01.977 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:53:01.977 13:47:55 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:53:01.977 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:53:02.236 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:53:02.236 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:53:02.236 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:53:02.236 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:53:02.236 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:53:02.495 Validate MD5 checksum, iteration 1 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:02.495 13:47:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:53:02.768 [2024-12-06 13:47:55.640898] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 
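The restarted target (pid 84695) comes up with used=0 cached chunks and opened=0 bands (upgrade_shutdown.sh@82-@90 above), confirming the clean shutdown fully drained the cache. test_validate_checksum then reads the FTL namespace back over NVMe/TCP in 1024 MiB strides and fingerprints each stride; the spdk_dd launch just above is iteration 1. A sketch of the loop, reconstructed from the xtrace (the sums array holding the expected pre-shutdown checksums is an assumption; tcp_dd is the common.sh wrapper around the spdk_dd command shown above):

    skip=0
    iterations=2
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1024 x 1 MiB blocks from ftln1 into the scratch file, then advance.
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        [[ $sum != "${sums[i]}" ]] && exit 1  # data must survive the shutdown intact
    done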
00:53:02.768 [2024-12-06 13:47:55.641047] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84771 ]
00:53:02.768 [2024-12-06 13:47:55.830248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:53:03.067 [2024-12-06 13:47:56.020777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:53:05.023  [2024-12-06T13:47:58.381Z] Copying: 671/1024 [MB] (671 MBps)
[2024-12-06T13:48:00.283Z] Copying: 1024/1024 [MB] (average 652 MBps)
00:53:07.183
00:53:07.183 13:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:53:07.183 13:47:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:53:09.104 13:48:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:53:09.105 Validate MD5 checksum, iteration 2
13:48:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c1afbe6ac8ea8df79649a1d2e04d37f7
00:53:09.105 13:48:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c1afbe6ac8ea8df79649a1d2e04d37f7 != \c\1\a\f\b\e\6\a\c\8\e\a\8\d\f\7\9\6\4\9\a\1\d\2\e\0\4\d\3\7\f\7 ]]
00:53:09.105 13:48:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:53:09.105 13:48:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:53:09.105 13:48:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:53:09.105 13:48:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:53:09.105 13:48:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:53:09.105 13:48:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:53:09.105 13:48:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:53:09.105 13:48:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:53:09.105 13:48:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:53:09.105 [2024-12-06 13:48:01.956668] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization...
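The backslash-riddled comparison in the xtrace is simply how bash prints the quoted right-hand side of a [[ ... != ... ]] pattern match. Per iteration the logic reduces to the sketch below; the expected digests were recorded earlier in the run, when the test pattern was first written, before the portion of the log shown here:

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    expected=c1afbe6ac8ea8df79649a1d2e04d37f7    # digest observed for this window
    sum=$(md5sum "$file" | cut -f1 -d' ')
    [[ $sum != "$expected" ]] && { echo "MD5 mismatch at skip=$skip" >&2; exit 1; }
    skip=$((skip + 1024))                        # next pass covers the next 1 GiB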
00:53:09.105 [2024-12-06 13:48:01.956862] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84838 ]
00:53:09.105 [2024-12-06 13:48:02.155424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:53:09.364 [2024-12-06 13:48:02.337750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:53:11.274  [2024-12-06T13:48:04.938Z] Copying: 660/1024 [MB] (660 MBps)
[2024-12-06T13:48:06.312Z] Copying: 1024/1024 [MB] (average 652 MBps)
00:53:13.212
00:53:13.212 13:48:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:53:13.212 13:48:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:53:15.114 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:53:15.114 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c225590b15e4b8ba2e03e0861a8963e7
00:53:15.114 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c225590b15e4b8ba2e03e0861a8963e7 != \c\2\2\5\5\9\0\b\1\5\e\4\b\8\b\a\2\e\0\3\e\0\8\6\1\a\8\9\6\3\e\7 ]]
00:53:15.114 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:53:15.114 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:53:15.114 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty
00:53:15.114 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84695 ]]
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84695
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84906
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84906
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84906 ']'
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:53:15.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
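This is the step the test exists for: tcp_target_shutdown_dirty sends SIGKILL, so FTL gets no chance to persist its clean-shutdown state, and the replacement target must recover from shared memory and the NV cache. The pattern, reconstructed from the xtrace above (pid and paths are the ones from this run; treat them as placeholders):

    kill -9 "$spdk_tgt_pid"        # SIGKILL: no clean FTL shutdown, state left dirty
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"  # polls /var/tmp/spdk.sock until the RPC server answers

The "SHM: clean 0, shm_clean 0" and "Recover open chunk" messages below are the direct consequence: the superblock was never marked clean, so startup takes the recovery path.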
00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:15.115 13:48:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:53:15.115 [2024-12-06 13:48:08.121969] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:53:15.115 [2024-12-06 13:48:08.122168] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84906 ] 00:53:15.375 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84695 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:53:15.375 [2024-12-06 13:48:08.303473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:15.375 [2024-12-06 13:48:08.437851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:16.756 [2024-12-06 13:48:09.548934] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:53:16.756 [2024-12-06 13:48:09.549027] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:53:16.756 [2024-12-06 13:48:09.696876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.756 [2024-12-06 13:48:09.696929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:53:16.756 [2024-12-06 13:48:09.696945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:53:16.756 [2024-12-06 13:48:09.696972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.756 [2024-12-06 13:48:09.697030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.756 [2024-12-06 13:48:09.697043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:53:16.756 [2024-12-06 13:48:09.697054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:53:16.756 [2024-12-06 13:48:09.697064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.756 [2024-12-06 13:48:09.697094] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:53:16.756 [2024-12-06 13:48:09.698139] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:53:16.756 [2024-12-06 13:48:09.698176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.756 [2024-12-06 13:48:09.698188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:53:16.756 [2024-12-06 13:48:09.698200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.093 ms 00:53:16.756 [2024-12-06 13:48:09.698211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.756 [2024-12-06 13:48:09.698613] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:53:16.757 [2024-12-06 13:48:09.724745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.757 [2024-12-06 13:48:09.724801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:53:16.757 [2024-12-06 13:48:09.724834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.132 ms 00:53:16.757 [2024-12-06 13:48:09.724846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.757 [2024-12-06 13:48:09.739288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:53:16.757 [2024-12-06 13:48:09.739326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:53:16.757 [2024-12-06 13:48:09.739339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:53:16.757 [2024-12-06 13:48:09.739349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.757 [2024-12-06 13:48:09.739885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.757 [2024-12-06 13:48:09.739908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:53:16.757 [2024-12-06 13:48:09.739920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:53:16.757 [2024-12-06 13:48:09.739931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.757 [2024-12-06 13:48:09.740001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.757 [2024-12-06 13:48:09.740016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:53:16.757 [2024-12-06 13:48:09.740027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:53:16.757 [2024-12-06 13:48:09.740038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.757 [2024-12-06 13:48:09.740065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.757 [2024-12-06 13:48:09.740076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:53:16.757 [2024-12-06 13:48:09.740088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:53:16.757 [2024-12-06 13:48:09.740097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.757 [2024-12-06 13:48:09.740123] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:53:16.757 [2024-12-06 13:48:09.744191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.757 [2024-12-06 13:48:09.744222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:53:16.757 [2024-12-06 13:48:09.744234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.073 ms 00:53:16.757 [2024-12-06 13:48:09.744261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.757 [2024-12-06 13:48:09.744298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.757 [2024-12-06 13:48:09.744309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:53:16.757 [2024-12-06 13:48:09.744321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:53:16.757 [2024-12-06 13:48:09.744331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.757 [2024-12-06 13:48:09.744369] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:53:16.757 [2024-12-06 13:48:09.744397] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:53:16.757 [2024-12-06 13:48:09.744443] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:53:16.757 [2024-12-06 13:48:09.744468] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:53:16.757 [2024-12-06 13:48:09.744563] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:53:16.757 [2024-12-06 13:48:09.744577] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:53:16.757 [2024-12-06 13:48:09.744590] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:53:16.757 [2024-12-06 13:48:09.744603] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:53:16.757 [2024-12-06 13:48:09.744615] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:53:16.757 [2024-12-06 13:48:09.744627] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:53:16.757 [2024-12-06 13:48:09.744638] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:53:16.757 [2024-12-06 13:48:09.744648] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:53:16.757 [2024-12-06 13:48:09.744658] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:53:16.757 [2024-12-06 13:48:09.744674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.757 [2024-12-06 13:48:09.744684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:53:16.757 [2024-12-06 13:48:09.744695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.306 ms 00:53:16.757 [2024-12-06 13:48:09.744705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.757 [2024-12-06 13:48:09.744779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.757 [2024-12-06 13:48:09.744790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:53:16.757 [2024-12-06 13:48:09.744801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:53:16.757 [2024-12-06 13:48:09.744811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.757 [2024-12-06 13:48:09.744903] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:53:16.757 [2024-12-06 13:48:09.744928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:53:16.757 [2024-12-06 13:48:09.744940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:53:16.757 [2024-12-06 13:48:09.744952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:16.757 [2024-12-06 13:48:09.744963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:53:16.757 [2024-12-06 13:48:09.744972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:53:16.757 [2024-12-06 13:48:09.744982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:53:16.757 [2024-12-06 13:48:09.744992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:53:16.757 [2024-12-06 13:48:09.745001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:53:16.757 [2024-12-06 13:48:09.745011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:16.757 [2024-12-06 13:48:09.745023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:53:16.757 [2024-12-06 13:48:09.745033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:53:16.757 [2024-12-06 13:48:09.745042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:16.757 [2024-12-06 13:48:09.745052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:53:16.757 [2024-12-06 13:48:09.745061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:53:16.757 [2024-12-06 13:48:09.745071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:16.757 [2024-12-06 13:48:09.745081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:53:16.757 [2024-12-06 13:48:09.745090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:53:16.757 [2024-12-06 13:48:09.745099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:16.757 [2024-12-06 13:48:09.745110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:53:16.757 [2024-12-06 13:48:09.745119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:53:16.757 [2024-12-06 13:48:09.745140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:16.757 [2024-12-06 13:48:09.745150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:53:16.757 [2024-12-06 13:48:09.745160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:53:16.757 [2024-12-06 13:48:09.745169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:16.757 [2024-12-06 13:48:09.745179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:53:16.757 [2024-12-06 13:48:09.745188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:53:16.757 [2024-12-06 13:48:09.745198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:16.757 [2024-12-06 13:48:09.745207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:53:16.757 [2024-12-06 13:48:09.745216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:53:16.757 [2024-12-06 13:48:09.745226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:53:16.757 [2024-12-06 13:48:09.745236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:53:16.757 [2024-12-06 13:48:09.745246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:53:16.757 [2024-12-06 13:48:09.745256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:16.757 [2024-12-06 13:48:09.745265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:53:16.757 [2024-12-06 13:48:09.745275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:53:16.757 [2024-12-06 13:48:09.745284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:16.757 [2024-12-06 13:48:09.745293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:53:16.757 [2024-12-06 13:48:09.745303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:53:16.757 [2024-12-06 13:48:09.745312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:16.757 [2024-12-06 13:48:09.745322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:53:16.757 [2024-12-06 13:48:09.745331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:53:16.757 [2024-12-06 13:48:09.745342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:53:16.757 [2024-12-06 13:48:09.745352] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:53:16.757 [2024-12-06 13:48:09.745363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:53:16.757 [2024-12-06 13:48:09.745372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:53:16.757 [2024-12-06 13:48:09.745383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:53:16.757 [2024-12-06 13:48:09.745393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:53:16.757 [2024-12-06 13:48:09.745418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:53:16.757 [2024-12-06 13:48:09.745428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:53:16.757 [2024-12-06 13:48:09.745437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:53:16.757 [2024-12-06 13:48:09.745447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:53:16.757 [2024-12-06 13:48:09.745458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:53:16.757 [2024-12-06 13:48:09.745469] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:53:16.757 [2024-12-06 13:48:09.745483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:53:16.758 [2024-12-06 13:48:09.745494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:53:16.758 [2024-12-06 13:48:09.745506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:53:16.758 [2024-12-06 13:48:09.745517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:53:16.758 [2024-12-06 13:48:09.745528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:53:16.758 [2024-12-06 13:48:09.745539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:53:16.758 [2024-12-06 13:48:09.745550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:53:16.758 [2024-12-06 13:48:09.745560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:53:16.758 [2024-12-06 13:48:09.745571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:53:16.758 [2024-12-06 13:48:09.745582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:53:16.758 [2024-12-06 13:48:09.745592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:53:16.758 [2024-12-06 13:48:09.745603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:53:16.758 [2024-12-06 13:48:09.745612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:53:16.758 [2024-12-06 13:48:09.745623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:53:16.758 [2024-12-06 13:48:09.745634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:53:16.758 [2024-12-06 13:48:09.745645] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:53:16.758 [2024-12-06 13:48:09.745656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:53:16.758 [2024-12-06 13:48:09.745673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:53:16.758 [2024-12-06 13:48:09.745684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:53:16.758 [2024-12-06 13:48:09.745695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:53:16.758 [2024-12-06 13:48:09.745710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:53:16.758 [2024-12-06 13:48:09.745722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.758 [2024-12-06 13:48:09.745733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:53:16.758 [2024-12-06 13:48:09.745743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:53:16.758 [2024-12-06 13:48:09.745754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.758 [2024-12-06 13:48:09.790268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.758 [2024-12-06 13:48:09.790314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:53:16.758 [2024-12-06 13:48:09.790329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.458 ms 00:53:16.758 [2024-12-06 13:48:09.790356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.758 [2024-12-06 13:48:09.790403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.758 [2024-12-06 13:48:09.790424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:53:16.758 [2024-12-06 13:48:09.790437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:53:16.758 [2024-12-06 13:48:09.790447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.758 [2024-12-06 13:48:09.846495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.758 [2024-12-06 13:48:09.846537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:53:16.758 [2024-12-06 13:48:09.846552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.954 ms 00:53:16.758 [2024-12-06 13:48:09.846565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.758 [2024-12-06 13:48:09.846618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.758 [2024-12-06 13:48:09.846631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:53:16.758 [2024-12-06 13:48:09.846642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:53:16.758 [2024-12-06 13:48:09.846659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:16.758 [2024-12-06 13:48:09.846791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.758 [2024-12-06 13:48:09.846805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:53:16.758 [2024-12-06 13:48:09.846816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:53:16.758 [2024-12-06 13:48:09.846827] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:53:16.758 [2024-12-06 13:48:09.846875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:16.758 [2024-12-06 13:48:09.846888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:53:16.758 [2024-12-06 13:48:09.846899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:53:16.758 [2024-12-06 13:48:09.846910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.017 [2024-12-06 13:48:09.873809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.017 [2024-12-06 13:48:09.873853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:53:17.017 [2024-12-06 13:48:09.873869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.870 ms 00:53:17.017 [2024-12-06 13:48:09.873902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.017 [2024-12-06 13:48:09.874053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.017 [2024-12-06 13:48:09.874070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:53:17.017 [2024-12-06 13:48:09.874082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:53:17.017 [2024-12-06 13:48:09.874093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.017 [2024-12-06 13:48:09.919383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.017 [2024-12-06 13:48:09.919461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:53:17.017 [2024-12-06 13:48:09.919477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.263 ms 00:53:17.017 [2024-12-06 13:48:09.919490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.017 [2024-12-06 13:48:09.935687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.017 [2024-12-06 13:48:09.935724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:53:17.017 [2024-12-06 13:48:09.935750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.697 ms 00:53:17.017 [2024-12-06 13:48:09.935762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.017 [2024-12-06 13:48:10.035561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.017 [2024-12-06 13:48:10.035659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:53:17.017 [2024-12-06 13:48:10.035680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.716 ms 00:53:17.017 [2024-12-06 13:48:10.035692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.017 [2024-12-06 13:48:10.036167] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:53:17.017 [2024-12-06 13:48:10.036600] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:53:17.017 [2024-12-06 13:48:10.037022] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:53:17.017 [2024-12-06 13:48:10.037406] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:53:17.018 [2024-12-06 13:48:10.037423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.018 [2024-12-06 13:48:10.037436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:53:17.018 [2024-12-06 
13:48:10.037447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.656 ms 00:53:17.018 [2024-12-06 13:48:10.037458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.018 [2024-12-06 13:48:10.037590] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:53:17.018 [2024-12-06 13:48:10.037606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.018 [2024-12-06 13:48:10.037622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:53:17.018 [2024-12-06 13:48:10.037634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:53:17.018 [2024-12-06 13:48:10.037645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.018 [2024-12-06 13:48:10.062438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.018 [2024-12-06 13:48:10.062491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:53:17.018 [2024-12-06 13:48:10.062508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.767 ms 00:53:17.018 [2024-12-06 13:48:10.062520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.018 [2024-12-06 13:48:10.077368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.018 [2024-12-06 13:48:10.077416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:53:17.018 [2024-12-06 13:48:10.077430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:53:17.018 [2024-12-06 13:48:10.077441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.018 [2024-12-06 13:48:10.077551] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:53:17.018 [2024-12-06 13:48:10.077891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.018 [2024-12-06 13:48:10.077902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:53:17.018 [2024-12-06 13:48:10.077913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.342 ms 00:53:17.018 [2024-12-06 13:48:10.077923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.587 [2024-12-06 13:48:10.660177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.587 [2024-12-06 13:48:10.660256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:53:17.587 [2024-12-06 13:48:10.660276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 581.020 ms 00:53:17.587 [2024-12-06 13:48:10.660289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.587 [2024-12-06 13:48:10.666458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.587 [2024-12-06 13:48:10.666497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:53:17.587 [2024-12-06 13:48:10.666510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.233 ms 00:53:17.587 [2024-12-06 13:48:10.666523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.587 [2024-12-06 13:48:10.667016] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:53:17.587 [2024-12-06 13:48:10.667043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.587 [2024-12-06 13:48:10.667055] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:53:17.587 [2024-12-06 13:48:10.667067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.480 ms 00:53:17.587 [2024-12-06 13:48:10.667078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.587 [2024-12-06 13:48:10.667147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.587 [2024-12-06 13:48:10.667161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:53:17.587 [2024-12-06 13:48:10.667173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:53:17.587 [2024-12-06 13:48:10.667191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:17.587 [2024-12-06 13:48:10.667228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 589.677 ms, result 0 00:53:17.587 [2024-12-06 13:48:10.667276] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:53:17.587 [2024-12-06 13:48:10.667471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:17.587 [2024-12-06 13:48:10.667484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:53:17.587 [2024-12-06 13:48:10.667495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:53:17.587 [2024-12-06 13:48:10.667505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.155 [2024-12-06 13:48:11.249296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.155 [2024-12-06 13:48:11.249368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:53:18.155 [2024-12-06 13:48:11.249420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 580.486 ms 00:53:18.156 [2024-12-06 13:48:11.249433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.415 [2024-12-06 13:48:11.255630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.415 [2024-12-06 13:48:11.255667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:53:18.415 [2024-12-06 13:48:11.255682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.201 ms 00:53:18.415 [2024-12-06 13:48:11.255693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.415 [2024-12-06 13:48:11.256195] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:53:18.415 [2024-12-06 13:48:11.256223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.415 [2024-12-06 13:48:11.256235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:53:18.415 [2024-12-06 13:48:11.256248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.498 ms 00:53:18.415 [2024-12-06 13:48:11.256260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.415 [2024-12-06 13:48:11.256293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.415 [2024-12-06 13:48:11.256306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:53:18.415 [2024-12-06 13:48:11.256318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:53:18.415 [2024-12-06 13:48:11.256329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.415 [2024-12-06 
13:48:11.256373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 589.090 ms, result 0 00:53:18.415 [2024-12-06 13:48:11.256438] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:53:18.415 [2024-12-06 13:48:11.256455] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:53:18.415 [2024-12-06 13:48:11.256470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.415 [2024-12-06 13:48:11.256483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:53:18.415 [2024-12-06 13:48:11.256496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1178.942 ms 00:53:18.415 [2024-12-06 13:48:11.256508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.415 [2024-12-06 13:48:11.256545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.415 [2024-12-06 13:48:11.256565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:53:18.415 [2024-12-06 13:48:11.256578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:53:18.415 [2024-12-06 13:48:11.256600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.415 [2024-12-06 13:48:11.269926] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:53:18.415 [2024-12-06 13:48:11.270069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.415 [2024-12-06 13:48:11.270083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:53:18.415 [2024-12-06 13:48:11.270095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.451 ms 00:53:18.415 [2024-12-06 13:48:11.270106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.415 [2024-12-06 13:48:11.270758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.415 [2024-12-06 13:48:11.270782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:53:18.415 [2024-12-06 13:48:11.270799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.559 ms 00:53:18.416 [2024-12-06 13:48:11.270810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.416 [2024-12-06 13:48:11.272949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.416 [2024-12-06 13:48:11.272971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:53:18.416 [2024-12-06 13:48:11.272983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.118 ms 00:53:18.416 [2024-12-06 13:48:11.272993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.416 [2024-12-06 13:48:11.273037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.416 [2024-12-06 13:48:11.273049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:53:18.416 [2024-12-06 13:48:11.273061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:53:18.416 [2024-12-06 13:48:11.273077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.416 [2024-12-06 13:48:11.273190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.416 [2024-12-06 13:48:11.273202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:53:18.416 
[2024-12-06 13:48:11.273213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:53:18.416 [2024-12-06 13:48:11.273224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.416 [2024-12-06 13:48:11.273248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.416 [2024-12-06 13:48:11.273259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:53:18.416 [2024-12-06 13:48:11.273269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:53:18.416 [2024-12-06 13:48:11.273279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.416 [2024-12-06 13:48:11.273324] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:53:18.416 [2024-12-06 13:48:11.273337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.416 [2024-12-06 13:48:11.273348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:53:18.416 [2024-12-06 13:48:11.273358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:53:18.416 [2024-12-06 13:48:11.273368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.416 [2024-12-06 13:48:11.273433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:53:18.416 [2024-12-06 13:48:11.273446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:53:18.416 [2024-12-06 13:48:11.273457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:53:18.416 [2024-12-06 13:48:11.273468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:53:18.416 [2024-12-06 13:48:11.274881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1577.419 ms, result 0 00:53:18.416 [2024-12-06 13:48:11.290471] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:18.416 [2024-12-06 13:48:11.306483] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:53:18.416 [2024-12-06 13:48:11.317313] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:53:18.416 Validate MD5 checksum, iteration 1 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:18.416 13:48:11 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:18.416 13:48:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:53:18.416 [2024-12-06 13:48:11.445616] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization... 00:53:18.416 [2024-12-06 13:48:11.445752] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84951 ] 00:53:18.676 [2024-12-06 13:48:11.634714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:18.936 [2024-12-06 13:48:11.831992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:20.844  [2024-12-06T13:48:14.203Z] Copying: 660/1024 [MB] (660 MBps) [2024-12-06T13:48:18.408Z] Copying: 1024/1024 [MB] (average 661 MBps) 00:53:25.308 00:53:25.308 13:48:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:53:25.308 13:48:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:53:26.713 13:48:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:53:26.713 13:48:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c1afbe6ac8ea8df79649a1d2e04d37f7 00:53:26.713 13:48:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c1afbe6ac8ea8df79649a1d2e04d37f7 != \c\1\a\f\b\e\6\a\c\8\e\a\8\d\f\7\9\6\4\9\a\1\d\2\e\0\4\d\3\7\f\7 ]] 00:53:26.713 13:48:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:53:26.713 13:48:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:53:26.713 Validate MD5 checksum, iteration 2 00:53:26.713 13:48:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:53:26.713 13:48:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:53:26.713 13:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:53:26.713 13:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:53:26.713 13:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:53:26.713 13:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:53:26.713 13:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:53:26.713 [2024-12-06 13:48:19.779319] Starting SPDK v25.01-pre git sha1 
88d8055fc / DPDK 24.03.0 initialization... 00:53:26.713 [2024-12-06 13:48:19.779502] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85036 ] 00:53:26.972 [2024-12-06 13:48:19.967159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:27.232 [2024-12-06 13:48:20.146896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:29.134  [2024-12-06T13:48:22.492Z] Copying: 676/1024 [MB] (676 MBps) [2024-12-06T13:48:24.390Z] Copying: 1024/1024 [MB] (average 662 MBps) 00:53:31.290 00:53:31.290 13:48:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:53:31.290 13:48:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:53:33.189 13:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:53:33.189 13:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c225590b15e4b8ba2e03e0861a8963e7 00:53:33.189 13:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c225590b15e4b8ba2e03e0861a8963e7 != \c\2\2\5\5\9\0\b\1\5\e\4\b\8\b\a\2\e\0\3\e\0\8\6\1\a\8\9\6\3\e\7 ]] 00:53:33.189 13:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:53:33.189 13:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:53:33.189 13:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:53:33.189 13:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:53:33.189 13:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:53:33.189 13:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84906 ]] 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84906 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84906 ']' 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84906 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84906 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:33.190 killing process with pid 84906 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84906' 00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84906
00:53:33.190 13:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84906
00:53:34.566 [2024-12-06 13:48:27.280803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:53:34.566 [2024-12-06 13:48:27.301916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.566 [2024-12-06 13:48:27.301958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:53:34.566 [2024-12-06 13:48:27.301991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:53:34.566 [2024-12-06 13:48:27.302002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.566 [2024-12-06 13:48:27.302027] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:53:34.566 [2024-12-06 13:48:27.306830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.566 [2024-12-06 13:48:27.306880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:53:34.566 [2024-12-06 13:48:27.306893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.786 ms
00:53:34.566 [2024-12-06 13:48:27.306920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.566 [2024-12-06 13:48:27.307138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.566 [2024-12-06 13:48:27.307152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:53:34.567 [2024-12-06 13:48:27.307164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms
00:53:34.567 [2024-12-06 13:48:27.307174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.308487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.567 [2024-12-06 13:48:27.308651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:53:34.567 [2024-12-06 13:48:27.308674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.295 ms
00:53:34.567 [2024-12-06 13:48:27.308691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.309666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.567 [2024-12-06 13:48:27.309695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims
00:53:34.567 [2024-12-06 13:48:27.309707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.933 ms
00:53:34.567 [2024-12-06 13:48:27.309718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.325927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.567 [2024-12-06 13:48:27.325963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:53:34.567 [2024-12-06 13:48:27.325983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.170 ms
00:53:34.567 [2024-12-06 13:48:27.325993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.334000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.567 [2024-12-06 13:48:27.334036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:53:34.567 [2024-12-06 13:48:27.334049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.969 ms
00:53:34.567 [2024-12-06 13:48:27.334060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.334147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.567 [2024-12-06 13:48:27.334160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:53:34.567 [2024-12-06 13:48:27.334171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms
00:53:34.567 [2024-12-06 13:48:27.334187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.348891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.567 [2024-12-06 13:48:27.348923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata
00:53:34.567 [2024-12-06 13:48:27.348936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.686 ms
00:53:34.567 [2024-12-06 13:48:27.348946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.363567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.567 [2024-12-06 13:48:27.363724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata
00:53:34.567 [2024-12-06 13:48:27.363745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.587 ms
00:53:34.567 [2024-12-06 13:48:27.363754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.378239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.567 [2024-12-06 13:48:27.378273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:53:34.567 [2024-12-06 13:48:27.378285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.447 ms
00:53:34.567 [2024-12-06 13:48:27.378294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.393079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.567 [2024-12-06 13:48:27.393235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:53:34.567 [2024-12-06 13:48:27.393256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.711 ms
00:53:34.567 [2024-12-06 13:48:27.393268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.393306] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:53:34.567 [2024-12-06 13:48:27.393324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:53:34.567 [2024-12-06 13:48:27.393337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:53:34.567 [2024-12-06 13:48:27.393348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:53:34.567 [2024-12-06 13:48:27.393361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:53:34.567 [2024-12-06 13:48:27.393543] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:53:34.567 [2024-12-06 13:48:27.393553] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: bde2c8f6-55e5-4ae9-a512-4041382f02c9
00:53:34.567 [2024-12-06 13:48:27.393565] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:53:34.567 [2024-12-06 13:48:27.393575] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:53:34.567 [2024-12-06 13:48:27.393585] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:53:34.567 [2024-12-06 13:48:27.393595] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:53:34.567 [2024-12-06 13:48:27.393606] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:53:34.567 [2024-12-06 13:48:27.393617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:53:34.567 [2024-12-06 13:48:27.393633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:53:34.567 [2024-12-06 13:48:27.393643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:53:34.567 [2024-12-06 13:48:27.393653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:53:34.567 [2024-12-06 13:48:27.393663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.567 [2024-12-06 13:48:27.393679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:53:34.567 [2024-12-06 13:48:27.393692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.359 ms
00:53:34.567 [2024-12-06 13:48:27.393703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.414798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.567 [2024-12-06 13:48:27.414937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:53:34.567 [2024-12-06 13:48:27.415097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.060 ms
00:53:34.567 [2024-12-06 13:48:27.415137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.415777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:53:34.567 [2024-12-06 13:48:27.415880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:53:34.567 [2024-12-06 13:48:27.415956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.587 ms
00:53:34.567 [2024-12-06 13:48:27.415994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.485017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:34.567 [2024-12-06 13:48:27.485177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:53:34.567 [2024-12-06 13:48:27.485349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:34.567 [2024-12-06 13:48:27.485419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.485482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:34.567 [2024-12-06 13:48:27.485628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:53:34.567 [2024-12-06 13:48:27.485703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:34.567 [2024-12-06 13:48:27.485735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.485862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:34.567 [2024-12-06 13:48:27.485963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:53:34.567 [2024-12-06 13:48:27.486001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:34.567 [2024-12-06 13:48:27.486033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.486086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:34.567 [2024-12-06 13:48:27.486119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:53:34.567 [2024-12-06 13:48:27.486215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:34.567 [2024-12-06 13:48:27.486246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.567 [2024-12-06 13:48:27.619743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:34.567 [2024-12-06 13:48:27.620039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:53:34.567 [2024-12-06 13:48:27.620224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:34.567 [2024-12-06 13:48:27.620264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.827 [2024-12-06 13:48:27.726219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:34.827 [2024-12-06 13:48:27.726493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:53:34.827 [2024-12-06 13:48:27.726613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:34.827 [2024-12-06 13:48:27.726652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.827 [2024-12-06 13:48:27.726816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:34.827 [2024-12-06 13:48:27.726854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:53:34.827 [2024-12-06 13:48:27.726947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:34.827 [2024-12-06 13:48:27.726984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.827 [2024-12-06 13:48:27.727086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:34.827 [2024-12-06 13:48:27.727248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:53:34.827 [2024-12-06 13:48:27.727282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:34.827 [2024-12-06 13:48:27.727364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.827 [2024-12-06 13:48:27.727562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:34.827 [2024-12-06 13:48:27.727741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:53:34.827 [2024-12-06 13:48:27.727780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:34.827 [2024-12-06 13:48:27.727812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.827 [2024-12-06 13:48:27.727889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:34.827 [2024-12-06 13:48:27.728083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:53:34.827 [2024-12-06 13:48:27.728128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:34.827 [2024-12-06 13:48:27.728159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.827 [2024-12-06 13:48:27.728232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:34.827 [2024-12-06 13:48:27.728315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:53:34.827 [2024-12-06 13:48:27.728351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:34.827 [2024-12-06 13:48:27.728381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.827 [2024-12-06 13:48:27.728476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:53:34.827 [2024-12-06 13:48:27.728569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:53:34.827 [2024-12-06 13:48:27.728606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:53:34.827 [2024-12-06 13:48:27.728637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:53:34.827 [2024-12-06 13:48:27.728818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 426.854 ms, result 0
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:53:36.206 Remove shared memory files 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84695
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:53:36.206 ************************************
00:53:36.206 END TEST ftl_upgrade_shutdown
00:53:36.206 ************************************
00:53:36.206
00:53:36.206 real 1m35.443s
00:53:36.206 user 2m9.790s
00:53:36.206 sys 0m26.081s
00:53:36.206 13:48:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:53:36.207 13:48:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:53:36.207 13:48:29 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:53:36.207 13:48:29 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:53:36.207 13:48:29 ftl -- ftl/ftl.sh@14 -- # killprocess 77708
00:53:36.207 13:48:29 ftl -- common/autotest_common.sh@954 -- # '[' -z 77708 ']'
00:53:36.207 13:48:29 ftl -- common/autotest_common.sh@958 -- # kill -0 77708
00:53:36.207 Process with pid 77708 is not found /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77708) - No such process
00:53:36.207 13:48:29 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77708 is not found'
00:53:36.207 13:48:29 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:53:36.207 13:48:29 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85161
00:53:36.207 13:48:29 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:53:36.207 13:48:29 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85161
00:53:36.207 13:48:29 ftl -- common/autotest_common.sh@835 -- # '[' -z 85161 ']'
00:53:36.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 13:48:29 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:53:36.207 13:48:29 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:53:36.207 13:48:29 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:53:36.207 13:48:29 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:53:36.207 13:48:29 ftl -- common/autotest_common.sh@10 -- # set +x
00:53:36.465 [2024-12-06 13:48:29.369857] Starting SPDK v25.01-pre git sha1 88d8055fc / DPDK 24.03.0 initialization...
00:53:36.466 [2024-12-06 13:48:29.370059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85161 ]
00:53:36.725 [2024-12-06 13:48:29.563942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:53:36.725 [2024-12-06 13:48:29.706187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:53:37.661 13:48:30 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:53:37.661 13:48:30 ftl -- common/autotest_common.sh@868 -- # return 0
00:53:37.661 13:48:30 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:53:38.228 nvme0n1
00:53:38.228 13:48:31 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:53:38.228 13:48:31 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:53:38.228 13:48:31 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:53:38.488 13:48:31 ftl -- ftl/common.sh@28 -- # stores=ccf820a2-3a25-40c9-8280-a3d7d754b9dc
00:53:38.488 13:48:31 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:53:38.488 13:48:31 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ccf820a2-3a25-40c9-8280-a3d7d754b9dc
00:53:38.488 13:48:31 ftl -- ftl/ftl.sh@23 -- # killprocess 85161
00:53:38.488 13:48:31 ftl -- common/autotest_common.sh@954 -- # '[' -z 85161 ']'
00:53:38.488 13:48:31 ftl -- common/autotest_common.sh@958 -- # kill -0 85161
00:53:38.488 13:48:31 ftl -- common/autotest_common.sh@959 -- # uname
00:53:38.488 13:48:31 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:53:38.488 13:48:31 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85161
00:53:38.488 killing process with pid 85161 13:48:31 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:53:38.488 13:48:31 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:53:38.488 13:48:31 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85161'
00:53:38.488 13:48:31 ftl -- common/autotest_common.sh@973 -- # kill 85161
00:53:38.488 13:48:31 ftl -- common/autotest_common.sh@978 -- # wait 85161
00:53:41.770 13:48:34 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:53:41.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:53:41.770 Waiting for block devices as requested
00:53:41.770 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:53:41.770 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:53:41.770 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:53:42.029 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:53:47.302 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:53:47.302 Remove shared memory files 13:48:40 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:53:47.302 13:48:40 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:53:47.302 13:48:40 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:53:47.302 13:48:40 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:53:47.303 13:48:40 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:53:47.303 13:48:40 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:53:47.303 13:48:40 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:53:47.303 ************************************
00:53:47.303 END TEST ftl
00:53:47.303 ************************************
00:53:47.303
00:53:47.303 real 11m4.070s
00:53:47.303 user 13m36.677s
00:53:47.303 sys 1m41.745s
00:53:47.303 13:48:40 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:53:47.303 13:48:40 ftl -- common/autotest_common.sh@10 -- # set +x
00:53:47.303 13:48:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:53:47.303 13:48:40 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:53:47.303 13:48:40 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:53:47.303 13:48:40 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:53:47.303 13:48:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:53:47.303 13:48:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:53:47.303 13:48:40 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:53:47.303 13:48:40 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:53:47.303 13:48:40 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:53:47.303 13:48:40 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:53:47.303 13:48:40 -- common/autotest_common.sh@726 -- # xtrace_disable
00:53:47.303 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:53:47.303 13:48:40 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:53:47.303 13:48:40 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:53:47.303 13:48:40 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:53:47.303 13:48:40 -- common/autotest_common.sh@10 -- # set +x
00:53:49.840 INFO: APP EXITING
00:53:49.840 INFO: killing all VMs
00:53:49.840 INFO: killing vhost app
00:53:49.840 INFO: EXIT DONE
00:53:49.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:53:50.406 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:53:50.406 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:53:50.406 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:53:50.406 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:53:50.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:53:51.229 Cleaning
00:53:51.229 Removing: /var/run/dpdk/spdk0/config
00:53:51.229 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:53:51.229 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:53:51.229 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:53:51.229 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:53:51.229 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:53:51.229 Removing: /var/run/dpdk/spdk0/hugepage_info
00:53:51.229 Removing: /var/run/dpdk/spdk0
00:53:51.229 Removing: /var/run/dpdk/spdk_pid57887
00:53:51.229 Removing: /var/run/dpdk/spdk_pid58150
00:53:51.229 Removing: /var/run/dpdk/spdk_pid58394
00:53:51.229 Removing: /var/run/dpdk/spdk_pid58505
00:53:51.229 Removing: /var/run/dpdk/spdk_pid58572
00:53:51.229 Removing: /var/run/dpdk/spdk_pid58711
00:53:51.229 Removing: /var/run/dpdk/spdk_pid58740
00:53:51.229 Removing: /var/run/dpdk/spdk_pid58961
00:53:51.229 Removing: /var/run/dpdk/spdk_pid59079
00:53:51.229 Removing: /var/run/dpdk/spdk_pid59203
00:53:51.229 Removing: /var/run/dpdk/spdk_pid59340
00:53:51.229 Removing: /var/run/dpdk/spdk_pid59457
00:53:51.229 Removing: /var/run/dpdk/spdk_pid59502
00:53:51.229 Removing: /var/run/dpdk/spdk_pid59539
00:53:51.229 Removing: /var/run/dpdk/spdk_pid59615
00:53:51.229 Removing: /var/run/dpdk/spdk_pid59740
00:53:51.229 Removing: /var/run/dpdk/spdk_pid60217
00:53:51.229 Removing: /var/run/dpdk/spdk_pid60308
00:53:51.229 Removing: /var/run/dpdk/spdk_pid60399
00:53:51.229 Removing: /var/run/dpdk/spdk_pid60415
00:53:51.229 Removing: /var/run/dpdk/spdk_pid60596
00:53:51.229 Removing: /var/run/dpdk/spdk_pid60622
00:53:51.229 Removing: /var/run/dpdk/spdk_pid60799
00:53:51.229 Removing: /var/run/dpdk/spdk_pid60826
00:53:51.229 Removing: /var/run/dpdk/spdk_pid60901
00:53:51.229 Removing: /var/run/dpdk/spdk_pid60930
00:53:51.229 Removing: /var/run/dpdk/spdk_pid61005
00:53:51.229 Removing: /var/run/dpdk/spdk_pid61029
00:53:51.229 Removing: /var/run/dpdk/spdk_pid61246
00:53:51.229 Removing: /var/run/dpdk/spdk_pid61288
00:53:51.229 Removing: /var/run/dpdk/spdk_pid61371
00:53:51.229 Removing: /var/run/dpdk/spdk_pid61571
00:53:51.229 Removing: /var/run/dpdk/spdk_pid61677
00:53:51.229 Removing: /var/run/dpdk/spdk_pid61729
00:53:51.486 Removing: /var/run/dpdk/spdk_pid62211
00:53:51.486 Removing: /var/run/dpdk/spdk_pid62320
00:53:51.486 Removing: /var/run/dpdk/spdk_pid62435
00:53:51.486 Removing: /var/run/dpdk/spdk_pid62499
00:53:51.486 Removing: /var/run/dpdk/spdk_pid62530
00:53:51.486 Removing: /var/run/dpdk/spdk_pid62614
00:53:51.486 Removing: /var/run/dpdk/spdk_pid63269
00:53:51.486 Removing: /var/run/dpdk/spdk_pid63317
00:53:51.486 Removing: /var/run/dpdk/spdk_pid63851
00:53:51.486 Removing: /var/run/dpdk/spdk_pid63962
00:53:51.487 Removing: /var/run/dpdk/spdk_pid64077
00:53:51.487 Removing: /var/run/dpdk/spdk_pid64135
00:53:51.487 Removing: /var/run/dpdk/spdk_pid64166
00:53:51.487 Removing: /var/run/dpdk/spdk_pid64196
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66119
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66281
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66291
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66303
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66356
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66360
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66372
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66422
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66426
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66444
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66489
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66493
00:53:51.487 Removing: /var/run/dpdk/spdk_pid66511
00:53:51.487 Removing: /var/run/dpdk/spdk_pid67928
00:53:51.487 Removing: /var/run/dpdk/spdk_pid68053
00:53:51.487 Removing: /var/run/dpdk/spdk_pid69474
00:53:51.487 Removing: /var/run/dpdk/spdk_pid71220
00:53:51.487 Removing: /var/run/dpdk/spdk_pid71313
00:53:51.487 Removing: /var/run/dpdk/spdk_pid71394
00:53:51.487 Removing: /var/run/dpdk/spdk_pid71515
00:53:51.487 Removing: /var/run/dpdk/spdk_pid71611
00:53:51.487 Removing: /var/run/dpdk/spdk_pid71714
00:53:51.487 Removing: /var/run/dpdk/spdk_pid71810
00:53:51.487 Removing: /var/run/dpdk/spdk_pid71892
00:53:51.487 Removing: /var/run/dpdk/spdk_pid72004
00:53:51.487 Removing: /var/run/dpdk/spdk_pid72107
00:53:51.487 Removing: /var/run/dpdk/spdk_pid72218
00:53:51.487 Removing: /var/run/dpdk/spdk_pid72305
00:53:51.487 Removing: /var/run/dpdk/spdk_pid72388
00:53:51.487 Removing: /var/run/dpdk/spdk_pid72507
00:53:51.487 Removing: /var/run/dpdk/spdk_pid72604
00:53:51.487 Removing: /var/run/dpdk/spdk_pid72712
00:53:51.487 Removing: /var/run/dpdk/spdk_pid72798
00:53:51.487 Removing: /var/run/dpdk/spdk_pid72879
00:53:51.487 Removing: /var/run/dpdk/spdk_pid72996
00:53:51.487 Removing: /var/run/dpdk/spdk_pid73093
00:53:51.487 Removing: /var/run/dpdk/spdk_pid73200
00:53:51.487 Removing: /var/run/dpdk/spdk_pid73285
00:53:51.487 Removing: /var/run/dpdk/spdk_pid73365
00:53:51.487 Removing: /var/run/dpdk/spdk_pid73451
00:53:51.487 Removing: /var/run/dpdk/spdk_pid73531
00:53:51.487 Removing: /var/run/dpdk/spdk_pid73645
00:53:51.487 Removing: /var/run/dpdk/spdk_pid73742
00:53:51.487 Removing: /var/run/dpdk/spdk_pid73842
00:53:51.487 Removing: /var/run/dpdk/spdk_pid73933
00:53:51.487 Removing: /var/run/dpdk/spdk_pid74013
00:53:51.487 Removing: /var/run/dpdk/spdk_pid74093
00:53:51.487 Removing: /var/run/dpdk/spdk_pid74173
00:53:51.487 Removing: /var/run/dpdk/spdk_pid74282
00:53:51.487 Removing: /var/run/dpdk/spdk_pid74384
00:53:51.487 Removing: /var/run/dpdk/spdk_pid74539
00:53:51.487 Removing: /var/run/dpdk/spdk_pid74833
00:53:51.487 Removing: /var/run/dpdk/spdk_pid74882
00:53:51.487 Removing: /var/run/dpdk/spdk_pid75377
00:53:51.487 Removing: /var/run/dpdk/spdk_pid75562
00:53:51.487 Removing: /var/run/dpdk/spdk_pid75668
00:53:51.745 Removing: /var/run/dpdk/spdk_pid75782
00:53:51.745 Removing: /var/run/dpdk/spdk_pid75839
00:53:51.745 Removing: /var/run/dpdk/spdk_pid75870
00:53:51.745 Removing: /var/run/dpdk/spdk_pid76161
00:53:51.745 Removing: /var/run/dpdk/spdk_pid76240
00:53:51.745 Removing: /var/run/dpdk/spdk_pid76331
00:53:51.745 Removing: /var/run/dpdk/spdk_pid76768
00:53:51.745 Removing: /var/run/dpdk/spdk_pid76909
00:53:51.745 Removing: /var/run/dpdk/spdk_pid77708
00:53:51.745 Removing: /var/run/dpdk/spdk_pid77867
00:53:51.745 Removing: /var/run/dpdk/spdk_pid78070
00:53:51.745 Removing: /var/run/dpdk/spdk_pid78180
00:53:51.745 Removing: /var/run/dpdk/spdk_pid78500
00:53:51.745 Removing: /var/run/dpdk/spdk_pid78765
00:53:51.745 Removing: /var/run/dpdk/spdk_pid79122
00:53:51.745 Removing: /var/run/dpdk/spdk_pid79340
00:53:51.745 Removing: /var/run/dpdk/spdk_pid79461
00:53:51.745 Removing: /var/run/dpdk/spdk_pid79536
00:53:51.745 Removing: /var/run/dpdk/spdk_pid79670
00:53:51.745 Removing: /var/run/dpdk/spdk_pid79713
00:53:51.745 Removing: /var/run/dpdk/spdk_pid79792
00:53:51.745 Removing: /var/run/dpdk/spdk_pid79985
00:53:51.745 Removing: /var/run/dpdk/spdk_pid80228
00:53:51.745 Removing: /var/run/dpdk/spdk_pid80592
00:53:51.745 Removing: /var/run/dpdk/spdk_pid80980
00:53:51.745 Removing: /var/run/dpdk/spdk_pid81361
00:53:51.745 Removing: /var/run/dpdk/spdk_pid81815
00:53:51.745 Removing: /var/run/dpdk/spdk_pid81964
00:53:51.745 Removing: /var/run/dpdk/spdk_pid82058
00:53:51.745 Removing: /var/run/dpdk/spdk_pid82704
00:53:51.745 Removing: /var/run/dpdk/spdk_pid82779
00:53:51.745 Removing: /var/run/dpdk/spdk_pid83235
00:53:51.745 Removing: /var/run/dpdk/spdk_pid83600
00:53:51.745 Removing: /var/run/dpdk/spdk_pid84067
00:53:51.745 Removing: /var/run/dpdk/spdk_pid84210
00:53:51.745 Removing: /var/run/dpdk/spdk_pid84270
00:53:51.745 Removing: /var/run/dpdk/spdk_pid84334
00:53:51.745 Removing: /var/run/dpdk/spdk_pid84409
00:53:51.745 Removing: /var/run/dpdk/spdk_pid84483
00:53:51.745 Removing: /var/run/dpdk/spdk_pid84695
00:53:51.745 Removing: /var/run/dpdk/spdk_pid84771
00:53:51.745 Removing: /var/run/dpdk/spdk_pid84838
00:53:51.745 Removing: /var/run/dpdk/spdk_pid84906
00:53:51.745 Removing: /var/run/dpdk/spdk_pid84951
00:53:51.745 Removing: /var/run/dpdk/spdk_pid85036
00:53:51.745 Removing: /var/run/dpdk/spdk_pid85161
00:53:51.745 Clean
00:53:51.745 13:48:44 -- common/autotest_common.sh@1453 -- # return 0
00:53:51.745 13:48:44 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:53:51.745 13:48:44 -- common/autotest_common.sh@732 -- # xtrace_disable
00:53:51.745 13:48:44 -- common/autotest_common.sh@10 -- # set +x
00:53:52.003 13:48:44 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:53:52.003 13:48:44 -- common/autotest_common.sh@732 -- # xtrace_disable
00:53:52.003 13:48:44 -- common/autotest_common.sh@10 -- # set +x
00:53:52.003 13:48:44 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:53:52.003 13:48:44 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:53:52.003 13:48:44 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:53:52.003 13:48:44 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:53:52.003 13:48:44 -- spdk/autotest.sh@398 -- # hostname
00:53:52.003 13:48:44 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:53:52.261 geninfo: WARNING: invalid characters removed from testname!
00:54:18.845 13:49:08 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:19.138 13:49:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:21.675 13:49:14 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:23.577 13:49:16 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:26.118 13:49:18 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:28.024 13:49:21 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:54:30.555 13:49:23 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:54:30.555 13:49:23 -- spdk/autorun.sh@1 -- $ timing_finish
00:54:30.555 13:49:23 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:54:30.555 13:49:23 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:54:30.555 13:49:23 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:54:30.555 13:49:23 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:54:30.555 + [[ -n 5300 ]]
00:54:30.555 + sudo kill 5300
00:54:30.564 [Pipeline] }
00:54:30.579 [Pipeline] // timeout
00:54:30.584 [Pipeline] }
00:54:30.597 [Pipeline] // stage
00:54:30.602 [Pipeline] }
00:54:30.615 [Pipeline] // catchError
00:54:30.625 [Pipeline] stage
00:54:30.628 [Pipeline] { (Stop VM)
00:54:30.640 [Pipeline] sh
00:54:30.921 + vagrant halt
00:54:34.205 ==> default: Halting domain...
00:54:40.914 [Pipeline] sh
00:54:41.194 + vagrant destroy -f
00:54:44.482 ==> default: Removing domain...
00:54:44.493 [Pipeline] sh
00:54:44.789 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:54:44.798 [Pipeline] }
00:54:44.813 [Pipeline] // stage
00:54:44.819 [Pipeline] }
00:54:44.837 [Pipeline] // dir
00:54:44.843 [Pipeline] }
00:54:44.859 [Pipeline] // wrap
00:54:44.866 [Pipeline] }
00:54:44.879 [Pipeline] // catchError
00:54:44.889 [Pipeline] stage
00:54:44.891 [Pipeline] { (Epilogue)
00:54:44.905 [Pipeline] sh
00:54:45.189 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:54:50.479 [Pipeline] catchError
00:54:50.481 [Pipeline] {
00:54:50.494 [Pipeline] sh
00:54:50.777 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:54:51.035 Artifacts sizes are good
00:54:51.045 [Pipeline] }
00:54:51.060 [Pipeline] // catchError
00:54:51.072 [Pipeline] archiveArtifacts
00:54:51.079 Archiving artifacts
00:54:51.204 [Pipeline] cleanWs
00:54:51.254 [WS-CLEANUP] Deleting project workspace...
00:54:51.254 [WS-CLEANUP] Deferred wipeout is used...
00:54:51.260 [WS-CLEANUP] done
00:54:51.262 [Pipeline] }
00:54:51.277 [Pipeline] // stage
00:54:51.282 [Pipeline] }
00:54:51.296 [Pipeline] // node
00:54:51.301 [Pipeline] End of Pipeline
00:54:51.333 Finished: SUCCESS